"R.Wieser" wrote:
Hello All,
I'm switching to Ortho mode pretty much the standard way: from setting the
viewport, through setting the projection-matrix mode, setting identity, and
calling glOrtho, to switching back to model-matrix mode and setting
identity there too (only difference: I've defined it vertically
upside-down, placing the origin in the upper-left).

For the arguments of both of the above I used 0,0 and the width and height
returned by the WM_SIZE event. After that I drew a 2D rectangle using a
line loop, with its corners at 0,0 and 128,128.

** The problem is that the top and left lines of that square line loop are
not visible, where I think they should be. What is going on here ?

When I draw the same rectangle starting from 1,1 I do see the upper and
left sides, but they are then lying directly against the inner sides of the
control (the one the OpenGL canvas is displayed in).
Regards,
Rudy Wieser
"R.Wieser" wrote:
(original message quoted above, snip)
Rudy, if this can help you, the following code displays
a rectangle with border and partially transparent bgnd
( image below, located at the top left of the screen ) http://www.cjoint.com/data/EIiqQXEYaTh_0.jpg
I didn't try to draw the border _after_ drawing the bgnd,
anyway note the 'y1+1' and the 'x2-1' bgnd here ...
void fctDisplayRectBgnd( int x1, int y1, int x2, int y2 )
{
    glColor4f( 1.0f, 1.0f, 1.0f, 1.0f );              // border = white
    glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
    glRecti( x1, y1, x2, y2 );                        // border
    glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
    glColor4f( 0.3f, 0.3f, 0.3f, 0.5f );              // gray-ish bgnd, 0.5 = half-transparent
    glRecti( x1, y1+1, x2-1, y2 );                    // background
    glDisable( GL_BLEND );
}
// HTH
A bit more google-fu led me to what's probably the answer to that: OpenGL
tries to draw, in Ortho mode, its pixels *between* the pixels of the
screen.

In short, it looks like my problem started because I was simply not aware
of OpenGL's method of defining its pixel positions as being on the
intersections of the screen-pixel grid instead of in the grid boxes
themselves (where the screen/videocard places them).
Rudy,

> Thanks, but that code and image does not show the problem

Well then, there's no problem anymore, is there ? :-)

> Have you ever tried to use 0,0 for those "x1" and "y1" arguments ?
> If you did, did you still see the upper and left border lines ?
> (or lower and left if you didn't invert the vertical axis like I did)

Well, thank you for asking me to do it for you.
x1=0 will cause the left vertical line to disappear, while
y1=0 will NOT cause the upper horizontal line to disappear,
and of course the two other lines stay the same as before.
Since I can put the whole box and its text at any given (x,y)
position anywhere on the screen, I've got what I wanted,
and I like to leave some pixels around things to let them 'breathe'.

(snip)

> Also, I've got a problem with the coordinates of the transparent
> rectangle: its "x1" and "y2" are the same as when you draw the boundary,
> meaning it *should* overlap at least two of those border lines. The odd
> thing is, your image does not show such overlapping ...

Yes, and I tried several different 'side-effect' values to check it out,
while setting the background to 'totally opaque' to be able to see
if it overlaps the border or not.
For the border rectangle drawn by glRecti( x1, y1, x2, y2 )
the drawing actually starts at x1 and ends at y2.
For the bgnd rectangle drawn by glRecti( x1, y1+1, x2-1, y2 )
the drawing actually starts at x1+1 and ends at y2-1.
Anyway you can reverse the order : draw the bgnd first, then the border.
Cheers
On Thu, 10 Sep 2015 21:45:38 +0200, R.Wieser wrote:

> A bit more google-fu led me to what's probably the answer to that: OpenGL
> tries to draw, in Ortho mode, its pixels *between* the pixels of the
> screen.

A projection matrix has nothing to do with pixels. It transforms between
coordinate systems. The matrices generated by glOrtho and gluOrtho2D map
the given rectangle to the bounds of the unit cube in normalised device
coordinates, which are in turn mapped by the viewport transformation to
the bounds of the viewport (note: *bounds*, not the centres of the
boundary rows and columns of fragments).

> In short, it looks like my problem started because I was simply not aware
> of OpenGL's method of defining its pixel positions as being on the
> intersections of the screen-pixel grid instead of in the grid boxes
> themselves (where the screen/videocard places them).

You appear to have it backwards.

A fragment (pixel) doesn't have a specific position. It's not a point,
it's a rectangle. Its corners have specific positions, as does its centre.
E.g., the lower-left corner of the fragment at the lower-left corner of
the window is <0,0> in window coordinates. The upper-right corner of that
fragment is <1,1>, and its centre is <1/2,1/2>. For any fragment, the
window coordinates of its corners will be integers, and the window
coordinates of its centre will be <x+1/2,y+1/2> where x and y are
integers.

When rasterising a line of width 1 without anti-aliasing, the fragments
which are affected are those where the line passes through an inscribed
diamond, i.e. the line must pass within a distance of half a fragment from
the fragment's centre according to the Manhattan (aka taxicab) metric.
Figure 14.2 in section 14.5.1 of the OpenGL 4.5 specification makes this
clear (the same figure exists in other versions, but the section numbers
vary).

If you rasterise a horizontal or vertical line whose endpoint coordinates
are integers in window coordinates, the line will run exactly along the
boundary between two rows or columns of fragments, and thus be exactly
half a fragment from the centres of the fragments on either side. The
result is that the choice of exactly which row or column of fragments is
rasterised will be affected by even the smallest rounding error.
On Sat, 12 Sep 2015 14:36:08 +0200, R.Wieser wrote:

> All-in-all, you're using a lot more words to say what I already did: the
> reason OpenGL's pixels sometimes* do not get placed where you think they
> should be is because of a non-alignment of OpenGL's and the physical
> screen-pixel grid. That non-alignment might be good for a 3D projection,
> but obviously not when using Ortho. So why is it still done that way ?

The issue isn't non-alignment, but alignment. And it isn't ortho versus
perspective, it's an issue of the specific case of trying to draw
untransformed, orthogonal single-pixel lines versus ... pretty much
everything else.

It works how it does because it makes sense. The fact that drawing
single-pixel orthogonal lines means that you have to think about what
you're actually doing doesn't detract from that.

Fudging the math to make that particular case easier would only make sense
if that particular case was common, and it isn't. It only applies if the
primitives being rendered are being positioned based upon the screen's
pixel grid, and the overall combination of the model-view, projection and
viewport transformations happens to be an identity transformation. Which
is rarely the case.

> *One "fun" thing I observed was when I, in ortho mode, drew a line loop
> and then a quad on top of it using the same integer coordinates. Two
> line-loop sides peeped out from under the quad. And pardon me, but that
> should *not* have happened.

That's like saying a coin toss which landed on tails "should have" landed
on heads.

For a filled primitive, having the edges lie exactly mid-way between two
rows of pixels is the ideal case, because it's clear-cut which pixels are
inside and which pixels are outside. For a line, it's the worst possible
case, as each set of pixels has exactly the same strength of claim to
being on the line.

For typical OpenGL usage, this isn't a problem. By the time that the 3D
coordinates have been subjected to the model-view and projection
transformations, projective division, and the viewport transformation, the
chances of the resulting line being exactly between two rows of pixels are
negligible. That situation only happens when you go out of your way to
create it.

> The only "solution" I can think of is to give an odd-width line a
> different treatment than a quad, thereby alleviating differences between
> the line and the quad screen results. But I would thoroughly dislike
> that, as it would be nothing more than a clumsy hack.

Which is why OpenGL doesn't try to do it. Being predictable, consistent
and rational are more important than being "intuitive".

And as time goes on, this issue will become progressively less relevant.
Even at 1920x1080 (which is about the lowest-resolution monitor you can
buy nowadays), single-pixel lines are barely visible. This is a
significant problem for legacy software which works in pixel coordinates
rather than e.g. a fraction of the window size. And as "4K" displays
(4096x2160 or 3840x2160) get cheaper, they'll start becoming the de facto
standard for PC monitors.
On Mon, 14 Sep 2015 12:51:51 +0200, R.Wieser wrote:

Look, I'm not going to explain the entire OpenGL pipeline in detail,
*especially* to someone who is determined to ignore anything which they
don't want to hear.

But know this: you will experience exactly the same issue with every other
modern rendering API (meaning: one which uses floating-point coordinates
with an arbitrary affine or projective transformation), whether 2D or 3D.
OpenGL, DirectX, Cairo, HTML5 Canvas, SVG, PostScript, PDF, ..., they all
work the same way: rasterisation is based upon pixel centres while pixel
edges are aligned to integer coordinates.

If you want an API designed around 1990s VGA hardware, use SDL.
> *especially* to someone who is determined to ignore anything which they
> don't want to hear.

I'm sorry, but all I hear from you is "you have to like what is as it is",
with *absolutely no explanation* why the, obviously creating problems,
method is so good.

So I will make it simple for you:
Why is, especially in ortho mode (where the Ortho projection's width and
height matches that of the viewport, and thus the rectangle of pixels on
the screen), an OpenGL (virtual) pixel placed 1) anywhere else than on the
screen pixel 2) on the most awkward of mathematical places, where the
slightest of rounding errors causes on-screen changes ?

> As I said before: a pixel isn't a point, it's a rectangle. A pixel
> doesn't have a single, specific location.
> The centre of the bottom-left pixel is at (0.5,0.5) in window
> coordinates.

Thank you for that explanation. It also indicates a problem when the term
"pixel" is used, as the last time you mentioned "rectangular pixels" I
assumed, from the context, that you meant OpenGL's virtual ones, instead
of the screen's physical ones.

But a question: if I would use the exact same explanation, but define the
origin of the screen as the center* of the top-left pixel, with it ranging
from -1/2 pixel to +1/2 pixel, would that make the explanation invalid ?
If so, why ? If not ....

* The position of a lamp (or most any single-point light source) is
normally defined as its center, not somewhere on its outside. Why would it
be different for a pixel ?

> is that in the case where two polygons share a common edge, any pixel
> along that edge will belong to exactly one of the two polygons.

I think you've here mentioned the reason why a quad drawn on top of a line
loop (of course using the same coordinates for both) does not fully
overlap the line loop: because it's *forced* to stop one pixel short of
its left/bottom end-coordinates, so it will not overlap an eventual next
one. A good choice, but not mentioned anywhere and as such not
expected. :-\

> 2. The reason for using the pixel's centre is that it is unbiased.
> Using any other location would result in rasterised polygons exhibiting
> a net shift whose magnitude depends upon the raster resolution.

Ehrmmm ... Although I think I understand what problem you are indicating
here, wasn't the problem that OpenGL is *not* using the pixel's center ?

> for which the line intersects a diamond inscribed within the pixel

Yeah, I found that diamond too. Though I have to say that I do not quite
see how it, in a basic Ortho projection, would affect a single-pixel-width
line drawn from the center of a physical pixel to the center of another
physical pixel.

> because unless x is a power of two, x*(1/x) typically won't be exactly
> 1.0 when using floating-point arithmetic.

And that's *exactly* why an OpenGL pixel should *not* be placed, when
using a basic ortho projection and integer coordinates, on the border of
two physical ones.

> unless you've managed to construct a case where you're consistently
> hitting the discontinuity in the rounding function.

Yeah, funny that: I'm using the *most basic* of setups (ortho projection
matching the viewport size, integer coordinates for any used vertex),
On Tue, 15 Sep 2015 12:11:04 +0200, R.Wieser wrote:

> > *especially* to someone who is determined to ignore anything which
> > they don't want to hear.
>
> I'm sorry, but all I hear from you is "you have to like what is as it
> is", with *absolutely no explanation* why the, obviously creating
> problems, method is so good.

You don't have to like it, but if you want to use it, you have to make an
effort to understand how it works, rather than making assumptions which
aren't actually true.

It's not an "obviously creating problems" method. Your problems stem from
making incorrect assumptions, not from the method.

> So I will make it simple for you:
> Why is, especially in ortho mode (where the Ortho projection's width and
> height matches that of the viewport, and thus the rectangle of pixels on
> the screen), an OpenGL (virtual) pixel placed 1) anywhere else than on
> the screen pixel 2) on the most awkward of mathematical places, where
> the slightest of rounding errors causes on-screen changes ?
As I said before: a pixel isn't a point, it's a rectangle. A pixel
doesn't have a single, specific location.
To simplify matters, we can forget all about projections, and deal
directly in window coordinates. Let w and h be the width and height (respectively) of the window in pixels.
The bottom-left corner of the bottom-left pixel is at the bottom-left
corner of the window, which is (0,0) in window coordinates.
Similarly, the top-right corner of the top-right pixel is at the top-right corner of the window, which is (w,h) in window coordinates.
The top-right corner of the bottom-left pixel is at (1,1) in window coordinates.
The centre of the bottom-left pixel is at (0.5,0.5) in window coordinates.
In short, window coordinates map a rectangular region of the screen to a rectangular region of the Euclidean plane. That mapping is continuous
(i.e. defined over the reals, not the integers). Vertices (and other positions, e.g. the raster position used for bitmap operations) are not constrained to integer coordinates.
In theory, a line or polygon is an infinite set of points, a subset of
R^2. But a finite set of pixels cannot exactly represent those, so rasterisation has to produce an approximation which attempts to minimise
the difference between the ideal and the achievable.
In the absence of anti-aliasing, rasterising a filled polygon affects
exactly those pixels whose centres lie within the polygon (i.e. within the convex hull of the polygon's vertices).
There are other rules which could be used, but:
1. The reason for testing whether a specific point lies within the polygon (rather than e.g. whether *any* part of the pixel lies inside the polygon)
is that in the case where two polygons share a common edge, any pixel
along that edge will belong to exactly one of the two polygons. This is of fundamental importance when using read-modify-write operations such as stencilling or glLogicOp(GL_XOR).
2. The reason for using the pixel's centre is that it is unbiased. Using
any other location would result in rasterised polygons exhibiting a net
shift whose magnitude depends upon the raster resolution.
In the absence of anti-aliasing, rasterising a line segment of width one
affects exactly those pixels for which the line intersects a diamond
inscribed within the pixel, i.e. some point (x,y) on the line segment
satisfies the constraint |x-xc|+|y-yc| < 0.5, where (xc,yc) are the
coordinates of the pixel centre.
The main reasons for the diamond rule are a) that it can be efficiently implemented in hardware, and b) that it guarantees that lines do not
contain gaps.
And again, the reason for using the pixel's centre is that it is unbiased.
On average, half of the affected pixels will lie on each side of the line.
This all works fine in practice, except for one specific case: when you
draw a line which is either exactly horizontal (i.e. every point on the
line has the same Y coordinate) or exactly vertical (i.e. every point on
the line has the same X coordinate), and the constant coordinate (X or Y)
happens to be exactly mid-way between pixel centres (i.e. the coordinate
is an integer).

In that case, the fact that the rule is unbiased on average doesn't help:
the perpendicular coordinate (X for a vertical line, Y for a horizontal
line) is constant, so, combined with the deterministic nature of computer
arithmetic, all of the pixels end up falling on the same side of the line.
Almost anything which involves rounding has pathological cases, and for
the rasterisation algorithm used by OpenGL (and almost everything else), vertical or horizontal lines with integer window coordinates are the pathological case. For filled polygons, vertical or horizontal edges whose coordinates are an integer plus 0.5 are the pathological case.
Adding in transformations makes matters slightly worse. If you set an orthographic projection which uses the window's dimensions in pixels, the combination of the (user-defined) orthographic projection and the
(built-in) viewport transformation should theoretically be an identity transformation. But in practice it will be slightly off because unless x
is a power of two, x*(1/x) typically won't be exactly 1.0 when using floating-point arithmetic.
The overall error is sufficiently small that it shouldn't matter in
practice ... unless you've managed to construct a case where you're consistently hitting the discontinuity in the rounding function.
In short, this can be summarised with an allegory: a man goes to the
doctor, and says "Doctor, when I do this ... it hurts"; to which the
doctor replies "So stop doing it!".