• MMU page sizes

    From Don Y@21:1/5 to All on Sun Aug 11 14:26:20 2024
    The norm has seemed to be 4KB for "modern times" (for some value
    of "modern"). There have been devices which ventured to smaller
    sizes (e.g., 1KB "tiny" pages) as well as larger (e.g., 1M/16M
    "sections").

    In days of old, there was much more variety in page sizes (e.g.,
    Alpha supported 8/16/32/64KB). As well as memory management
    mechanisms (e.g., the 645's segments-over-pages).

    The use of huge pages (e.g., sections) seems to just be an
    efficiency hack as it sidesteps one of the big advantages of paged
    memory: defining protection domains for individual "objects"
    (so, you're willing to claim that 16MB object needs no protection
    from any of its own components?)

    This means only the smaller pages are effective tools to enhance
    reliability, security, value-added, etc.
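
    To make that concrete, here is a minimal POSIX sketch (the sizes and
    the layout are made up for illustration): with small pages, a single
    large object can carve out an inaccessible guard page so the MMU
    traps a stray write from one of its own components; map the same
    object with one 16MB section and no such interior boundary exists.

        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            long psz = sysconf(_SC_PAGESIZE);       /* typically 4096 */
            size_t objsz = 16 * (size_t)psz;        /* one "object", 16 pages */

            /* Back the object with anonymous read/write pages. */
            char *obj = mmap(NULL, objsz, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (obj == MAP_FAILED) { perror("mmap"); return 1; }

            /* Revoke all access to one interior page: a guard zone
               shielding one component of the object from its neighbors. */
            if (mprotect(obj + 7 * psz, psz, PROT_NONE)) {
                perror("mprotect"); return 1;
            }

            obj[0] = 'x';           /* fine */
            obj[7 * psz] = 'x';     /* SIGSEGV: the stray write is trapped */
            return 0;
        }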

    And, 4KB seems to be the only real offering, there (the "tiny"
    page offerings are obsolescent).

    [A smarter choice would have been to elide the 4K in favor of 1K]

    Any opinions on where page sizes may settle out? Are we stuck with
    yet another single solution to this problem -- sadly based on
    desktop/mainframe environments (when most of the code being written
    runs AWAY from such environments)?

  • From Joe Gwinn@21:1/5 to blockedofcourse@foo.invalid on Sun Aug 11 18:13:44 2024
    On Sun, 11 Aug 2024 14:26:20 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    > The norm has seemed to be 4KB for "modern times" (for some value
    > of "modern"). There have been devices which ventured to smaller
    > sizes (e.g., 1KB "tiny" pages) as well as larger (e.g., 1M/16M
    > "sections").
    >
    > In days of old, there was much more variety in page sizes (e.g.,
    > Alpha supported 8/16/32/64KB). As well as memory management
    > mechanisms (e.g., the 645's segments-over-pages).
    >
    > The use of huge pages (e.g., sections) seems to just be an
    > efficiency hack as it sidesteps one of the big advantages of paged
    > memory: defining protection domains for individual "objects"
    > (so, you're willing to claim that 16MB object needs no protection
    > from any of its own components?)
    >
    > This means only the smaller pages are effective tools to enhance
    > reliability, security, value-added, etc.
    >
    > And, 4KB seems to be the only real offering, there (the "tiny"
    > page offerings are obsolescent).
    >
    > [A smarter choice would have been to elide the 4K in favor of 1K]
    >
    > Any opinions on where page sizes may settle out? Are we stuck with
    > yet another single solution to this problem -- sadly based on
    > desktop/mainframe environments (when most of the code being written
    > runs AWAY from such environments)?


    Memory-mapping page sizes have always been optimized using some
    complicated and evolving function of hardware cost and the required
    latency, throughput, and maximum supported memory. No single size will
    ever make sense across all applications, but the choices have been
    converging slowly.
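
    To put rough numbers on that tradeoff, a back-of-the-envelope sketch
    (the 64-entry TLB and the 1 GiB working set are assumed figures, not
    taken from any particular part):

        #include <stdio.h>

        int main(void)
        {
            const unsigned long long tlb_entries = 64;      /* assumed TLB size */
            const unsigned long long mapped = 1ULL << 30;   /* assumed 1 GiB working set */
            const unsigned long long page[] =
                { 1 << 10, 4 << 10, 64 << 10, 2 << 20, 16 << 20 };

            for (int i = 0; i < 5; i++)
                printf("page %8lluB: TLB reach %7lluKB, PTEs for 1GiB: %llu\n",
                       page[i], tlb_entries * page[i] / 1024, mapped / page[i]);
            return 0;
        }

    Bigger pages buy reach (fewer TLB misses, cheaper walk hardware);
    smaller pages cost a million-entry map for the same gigabyte. Which
    side of that matters depends entirely on the application, hence the
    slow convergence.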

    Joe Gwinn

  • From Don Y@21:1/5 to Joe Gwinn on Sun Aug 11 22:12:04 2024
    On 8/11/2024 3:13 PM, Joe Gwinn wrote:
    > On Sun, 11 Aug 2024 14:26:20 -0700, Don Y
    > <blockedofcourse@foo.invalid> wrote:
    >
    >> The norm has seemed to be 4KB for "modern times" (for some value
    >> of "modern"). There have been devices which ventured to smaller
    >> sizes (e.g., 1KB "tiny" pages) as well as larger (e.g., 1M/16M
    >> "sections").
    >>
    >> In days of old, there was much more variety in page sizes (e.g.,
    >> Alpha supported 8/16/32/64KB). As well as memory management
    >> mechanisms (e.g., the 645's segments-over-pages).
    >>
    >> The use of huge pages (e.g., sections) seems to just be an
    >> efficiency hack as it sidesteps one of the big advantages of paged
    >> memory: defining protection domains for individual "objects"
    >> (so, you're willing to claim that 16MB object needs no protection
    >> from any of its own components?)
    >>
    >> This means only the smaller pages are effective tools to enhance
    >> reliability, security, value-added, etc.
    >>
    >> And, 4KB seems to be the only real offering, there (the "tiny"
    >> page offerings are obsolescent).
    >>
    >> [A smarter choice would have been to elide the 4K in favor of 1K]
    >>
    >> Any opinions on where page sizes may settle out? Are we stuck with
    >> yet another single solution to this problem -- sadly based on
    >> desktop/mainframe environments (when most of the code being written
    >> runs AWAY from such environments)?
    >
    > Memory-mapping page sizes have always been optimized using some
    > complicated and evolving function of hardware cost and the required
    > latency, throughput, and maximum supported memory. No single size will
    > ever make sense across all applications, but the choices have been
    > converging slowly.

    But, it's a chicken/egg problem. People can't develop systems that
    exploit hardware capabilities that aren't available. And, conversely,
    people design with the constraints *imposed* by the hardware
    capabilities that ARE available.

    We *know* "keep things small", "compartmentalize", "information hiding",
    etc. are the mantras for reliable/maintainable code, but the hardware
    encourages *huge*, single-threaded monolithic kernels, etc.
    (super-sections? really??)
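
    The compartmentalization penalty is easy to put a number on. A quick
    sketch (the 10000 objects of 200 bytes each are invented figures):
    giving each component its own protection domain means rounding each
    one up to a whole number of pages.

        #include <stdio.h>

        int main(void)
        {
            const unsigned long long nobj = 10000, objsz = 200; /* invented workload */
            const unsigned long long page[] = { 1 << 10, 4 << 10, 1 << 20, 16 << 20 };

            for (int i = 0; i < 4; i++) {
                /* Each isolated object occupies a whole number of pages. */
                unsigned long long per = (objsz + page[i] - 1) / page[i] * page[i];
                printf("page %8lluB: footprint %llu MB (payload ~2 MB)\n",
                       page[i], nobj * per / (1024 * 1024));
            }
            return 0;
        }

    At 1KB granularity the isolation overhead is a few megabytes; with
    16MB "sections" the same 10000 compartments would need ~156 GB.
    Small pages are what make fine-grained compartments affordable.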

    [Imagine the folks who naively put a Linux kernel in a product... and
    *hope* it works! (sure, your toaster NEEDS support for -- not just
    one, but MANY! -- filesystems...)]
