On Sun, 11 Aug 2024 14:26:20 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
The norm has seemed to be 4KB for "modern times" (for some value
of "modern"). There have been devices which ventured to smaller
sizes (e.g., 1KB "tiny" pages) as well as larger (e.g., 1M/16M
"sections").
In days of old there was much more variety, both in page sizes (e.g.,
Alpha supported 8/16/32/64KB) and in memory-management mechanisms
(e.g., the GE 645's segments-over-pages).
The use of huge pages (e.g., sections) seems to be just an
efficiency hack, as it sidesteps one of the big advantages of paged
memory: defining protection domains for individual "objects".
(So, you're willing to claim that a 16MB object needs no protection
from any of its own components?)
This means only the smaller pages are effective tools for enhancing
reliability, security, added value, etc.
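To make the granularity point concrete, here is a minimal sketch
(POSIX mmap/mprotect, assuming a Linux-like system; the 16MB figure
just echoes the "section" size above). Protection attributes can only
be applied per page, so the page size is the smallest protection
domain you can carve out of an object:

#define _DEFAULT_SOURCE   /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);      /* typically 4096 */
    size_t len = 16UL * 1024 * 1024;       /* one 16MB "object" */

    char *obj = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (obj == MAP_FAILED) { perror("mmap"); return 1; }

    /* Guard a single page *inside* the object; impossible if the
     * whole object were backed by one 16MB huge page. */
    if (mprotect(obj + psz, psz, PROT_NONE) != 0)
        perror("mprotect");

    printf("page size %ld: protection granularity is %ld bytes\n",
           psz, psz);
    munmap(obj, len);
    return 0;
}

With the object backed by 4KB pages, an internal guard page like this
is cheap; back the object with a single 16MB section and no such
internal protection boundary can exist.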
And 4KB seems to be the only real offering there (the "tiny"
page offerings are obsolescent).
[A smarter choice would have been to elide the 4K in favor of 1K.]
Any opinions on where page sizes may settle out? Are we stuck with
yet another single solution to this problem -- sadly based on
desktop/mainframe environments (when most of the code being written
runs AWAY from such environments)?
Memory-mapping page sizes have always been optimized by some
complicated and evolving function of hardware cost and the required
latency, throughput, and maximum supported memory. No single size will
ever make sense across all applications, but the industry has been
converging slowly.
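To put rough numbers on the throughput side of that function: TLB
reach (entries times page size) is the span of address space you can
touch without taking a miss. A back-of-the-envelope sketch in C, where
the 64-entry TLB is an assumed, illustrative figure:

#include <stdio.h>

int main(void)
{
    const long entries = 64;                      /* hypothetical TLB */
    const long sizes[] = { 1L << 10, 4L << 10,    /* 1KB, 4KB  */
                           2L << 20, 16L << 20 }; /* 2MB, 16MB */

    /* reach = entries * page size, reported in KB */
    for (int i = 0; i < 4; i++)
        printf("%8ld-byte pages -> %8ld KB reach\n",
               sizes[i], entries * sizes[i] / 1024);
    return 0;
}

Going from 4KB to 16MB pages buys a 4096x increase in reach from the
same TLB hardware, which is exactly the efficiency motive described
above, paid for in protection granularity.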