Does anyone know why [font configure ... -size $x] only accepts integer sizes?
For negative (pixel) sizes, maybe that makes sense, but for positive (points) sizes it doesn't.
Is it something that could be changed, or are there hard intrinsic constraints?
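For anyone who wants to reproduce this, here is a minimal sketch of the behaviour in a Tk-enabled interpreter (e.g. `wish`). This assumes a Tk version where `-size` is integer-only; the exact error wording may differ between versions:

```tcl
# Integer sizes are accepted:
font create demo -family Helvetica -size 12    ;# positive = points
font configure demo -size -16                  ;# negative = pixels

# A fractional size is rejected with an "expected integer" style error:
font configure demo -size 12.5
```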
On Tuesday, 19 October 2021 at 07:37:44 UTC+1, Donald Arseneau wrote:

> Does anyone know why [font configure ... -size $x] only accepts integer sizes?
> For negative (pixel) sizes, maybe that makes sense, but for positive (points) sizes it doesn't.
> Is it something that could be changed, or are there hard intrinsic constraints?

It was probably because XLFDs were integer-based, but there's no real reason why we have to stick to this rule; the TkFontAttributes structure itself uses a double size field. The critical change would be to ParseFontNameObj() and ConfigAttributesObj() in tkFont.c, and possibly also TkFontParseXLFD() in the same file. I guess it didn't matter too much in the days when bitmapped fonts dominated and screens weren't very high resolution, but it does matter now. (It also seems that Windows's
I'd guess this is fixable and would really just be a bug, as the underlying model is right.
Donal.