• NVMe long in the tooth?

    From Don Y@21:1/5 to All on Tue Jul 16 13:44:34 2024
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Don Y on Thu Jul 18 10:05:05 2024
    On 2024-07-16, Don Y wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Meh, we'll have to replace PCIe first. Remember that the interface
    (currently M.2) is basically "just" a direct PCIe x4 connection back to
    the CPU.

    After that, it's just the integrated drive electronics that talk to the
    storage media itself (well via the NVMe driver, as opposed to the AHCI
    driver ala SATA).

    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dan Purgert on Thu Jul 18 06:34:55 2024
    On 7/18/2024 3:05 AM, Dan Purgert wrote:
    On 2024-07-16, Don Y wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Meh, we'll have to replace PCIe first. Remember that the interface (currently M.2) is basically "just" a direct PCIe x4 connection back to
    the CPU.

    If you're going to put a processor *in* a product, relying on having
    one or more PCIe lanes available (in a SoC/MCU/MPU) significantly limits
    your choice of SoC/MCU/MPU.

    NVMe really is just the command set/protocol; M.2 is a transport choice ("NVMe-over-PCIe").

    [I'd opt for "-over-Fabrics" to more effectively decouple the hardware.
    SCSI : iSCSI :: NVMe : NVMe-oF]
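
    A minimal sketch of that decoupling, assuming hypothetical names
    (blkdev_ops, blkdev_read, etc. are illustrative, not any real driver
    stack's API): the application binds to a generic block-device
    interface, and the NVMe-over-PCIe, NVMe-oF, or iSCSI specifics hide
    behind the ops table, so swapping transports never touches
    application code.

        /* Sketch only: a hypothetical block-device abstraction, not any
         * real driver stack's API.                                       */
        #include <stdint.h>
        #include <stddef.h>

        struct blkdev_ops {
            int (*read)(void *ctx, uint64_t lba, void *buf, size_t nblocks);
            int (*write)(void *ctx, uint64_t lba, const void *buf, size_t nblocks);
            int (*flush)(void *ctx);
        };

        struct blkdev {
            const struct blkdev_ops *ops;  /* transport-specific binding        */
            void *ctx;                     /* e.g. NVMe queue pair, TCP socket  */
            uint32_t block_size;           /* typically 512 or 4096 bytes       */
        };

        /* Application code only ever sees calls like this one...             */
        static inline int blkdev_read(struct blkdev *d, uint64_t lba,
                                      void *buf, size_t nblocks)
        {
            return d->ops->read(d->ctx, lba, buf, nblocks);
        }
        /* ...while an NVMe-over-PCIe, NVMe-oF, or iSCSI implementation can be
         * plugged into d->ops without the application noticing.              */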

    My concern is whether other protocols may be coming down the pike
    that anticipate new hardware, in much the same way that NVMe came
    about to exploit FLASH storage's additional capabilities that, e.g.,
    (i)SCSI wasn't able to leverage.

    After that, it's just the integrated drive electronics that talk to the storage media itself (well via the NVMe driver, as opposed to the AHCI
    driver ala SATA).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jasen Betts@21:1/5 to Don Y on Fri Jul 19 05:07:32 2024
    On 2024-07-16, Don Y <blockedofcourse@foo.invalid> wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    It's PCI express on a different connector, it should be good for a while.

    Intel will perhaps be releasing optical PC interconnect any year now,
    and soon after that there will be optical RAM and storage. Once they
    figure out how to manufacture optical circuit boards.

    --
    Jasen.
    🇺🇦 Слава Україні

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Jasen Betts on Thu Jul 18 23:26:15 2024
    On 7/18/2024 10:07 PM, Jasen Betts wrote:
    On 2024-07-16, Don Y <blockedofcourse@foo.invalid> wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    It's PCI express on a different connector, it should be good for a while.

    *One* transport is "over-PCIe". But, you don't see multiple
    (e.g.) M.2 cards in systems like you would have seen multiple SAS/SATA/PATA/SCSI HDDs. So, it seems to be more of a niche
    interface in much the same way that you see only a few PCIe i/fs
    on a motherboard (for specific I/Os).

    And, I question if it has "missed" the potential for RAM disks
    in much the same way SATA missed the potential for FLASH disks.
    E.g., it is conceivable to put a multigigabyte RAM disk on
    a 32b CPU in much the same way that one would put a multigigabyte
    HDD (or SSD) on said CPU (MCU). There's no need (and little
    value) to upgrade to a 64b CPU just so the RAM would be directly
    addressable!

    Note that the amount of data is an issue orthogonal to the speed of
    access (latency & throughput). Many applications would likely be as
    comfortable with 0ms RAM disks as they would be with 10ms HDDs.
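
    As a rough illustration (round figures, not from any particular
    device): with a block-addressed interface it is the LBA width, not
    the CPU's pointer width, that bounds capacity -- real command sets
    carry 48b/64b block addresses, but even a 32b LBA already spans 2TiB
    of 512-byte blocks, so a 32b MCU can front a multi-terabyte store
    without mapping any of it into its 4GB address space.

        /* Illustrative arithmetic only: 512-byte blocks, 32-bit block addresses. */
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t block_size = 512;          /* bytes per logical block      */
            uint64_t lba_count  = 1ULL << 32;   /* 32-bit logical block address */
            uint64_t capacity   = block_size * lba_count;

            /* Prints 2048 GiB (2 TiB) -- far beyond a 32b pointer's 4GB reach. */
            printf("addressable: %llu GiB\n", (unsigned long long)(capacity >> 30));
            return 0;
        }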

    Intel will perhaps be releasing optical PC interconnect any year now,
    and soon after that there will be optical RAM and storage. Once they
    figure out how to manufacture optical circuit boards.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Don Y on Fri Jul 19 09:33:23 2024
    On 2024-07-19, Don Y wrote:
    On 7/18/2024 10:07 PM, Jasen Betts wrote:
    On 2024-07-16, Don Y <blockedofcourse@foo.invalid> wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    It's PCI express on a different connector, it should be good for a while.

    *One* transport is "over-PCIe". But, you don't see multiple
    (e.g.) M.2 cards in systems like you would have seen multiple SAS/SATA/PATA/SCSI HDDs. So, it seems to be more of a niche
    interface in much the same way that you see only a few PCIe i/fs
    on a motherboard (for specific I/Os).

    Obviously you've never owned a laptop. M.2 is basically the standard
    connector for peripherals that were previously mini-PCIe / mSATA
    interfaces for hard drives, WiFi, cellular modems, etc.

    A desktop PC will likely only ever have one on the motherboard, maybe 2
    if you get one with "integrated wifi", as I've seen some desktops have.
    But that's not exactly a problem -- bulk storage still works "fine" on
    "slow SATA drives". That being said, it's not like desktop motherboards
    are short of PCI slots -- just throw in a M.2 breakout and you can add a
    few more drives.


    And, I question if it has "missed" the potential for RAM disks
    in much the same way SATA missed the potential for FLASH disks.

    SATA works fine for SSD though. It's just that engineers went with a
    new non-backwards-compatible controller + drive electronics ("NVMe")
    instead of trying to make a SATA4 specification.


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dan Purgert on Fri Jul 19 06:08:10 2024
    On 7/19/2024 2:33 AM, Dan Purgert wrote:
    On 2024-07-19, Don Y wrote:
    On 7/18/2024 10:07 PM, Jasen Betts wrote:
    On 2024-07-16, Don Y <blockedofcourse@foo.invalid> wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    It's PCI express on a different connector, it should be good for a while.
    *One* transport is "over-PCIe". But, you don't see multiple
    (e.g.) M.2 cards in systems like you would have seen multiple
    SAS/SATA/PATA/SCSI HDDs. So, it seems to be more of a niche
    interface in much the same way that you see only a few PCIe i/fs
    on a motherboard (for specific I/Os).

    Obviously you've never owned a laptop. M.2 is basically the standard connector for peripherals that were previously mini-PCIe / mSATA interfaces for hard drives, WiFi, cellular modems, etc.

    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition
    to add-in cards for HBAs, NICs, display adapters, etc.

    A desktop PC will likely only ever have one on the motherboard, maybe 2
    if you get one with "integrated wifi", as I've seen some desktops have.
    But that's not exactly a problem -- bulk storage still works "fine" on
    "slow SATA drives". That being said, it's not like desktop motherboards
    are short of PCI slots -- just throw in a M.2 breakout and you can add a
    few more drives.

    How's that different than adding another SAS/SCSI/SATA HBA and a few
    DOZEN more (external) drives on a desktop? What you *could* do isn't
    the issue.

    Said another way, why do we still see desktops with SATA/SAS interfaces
    instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.

    And, I question if it has "missed" the potential for RAM disks
    in much the same way SATA missed the potential for FLASH disks.

    SATA works fine for SSD though. It's just that engineers went with a
    new non-backwards-compatible controller + drive electronics ("NVMe")
    instead of trying to make a SATA4 specification.

    One can say that about the evolution of ALL the interface standards.
    When will NVMe (over PCIe) *replace* SATA -- in much the same way
    that SATA replaced PATA? Will some NVMe successor be waiting in the
    wings to replace NVMe (instead of an NVMe++)?

    Interfaces evolve to support additional features that are deemed
    valuable (or essential) to augment an existing technology. Why can't
    the "ultimate" interface appear and skip all of these incremental
    changes? Ans: because one can't anticipate -- OR AFFORD -- all of the
    future technology developments! (How long did SASI languish before
    SCSI went mainstream? Didn't anyone sit down and think that 32b wide-SCSI
    was not economically practical? So, why formalize it??)

    Many SATA controllers exploit the same sort of interface to memory
    as NVMe (i.e., PCIe). Yet, SATA SSDs are typically slower than
    NVMe SSDs (M.2). SATA (III) just isn't a fat enough *interface*
    pipe given the capabilities of the FLASH medium.
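
    For context (round figures): SATA III's 6Gb/s line rate with 8b/10b
    encoding tops out near 600MB/s usable, while a PCIe 3.0 x4 link
    delivers roughly 3.9GB/s -- and modern NAND arrays can stream well
    past 600MB/s, so the SATA link, not the FLASH, is the choke point.

        /* Rough link-budget comparison; round, illustrative figures. */
        #include <stdio.h>

        int main(void)
        {
            /* SATA III: 6 Gbit/s line rate with 8b/10b encoding overhead       */
            double sata3   = 6e9 * (8.0 / 10.0) / 8.0;
            /* PCIe 3.0 x4: 8 GT/s per lane with 128b/130b encoding, 4 lanes    */
            double pcie3x4 = 8e9 * (128.0 / 130.0) / 8.0 * 4.0;

            printf("SATA III : ~%.0f MB/s\n", sata3 / 1e6);     /* ~600 MB/s    */
            printf("PCIe3 x4 : ~%.0f MB/s\n", pcie3x4 / 1e6);   /* ~3940 MB/s   */
            return 0;
        }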

    NVMe, however, builds extra complexity into the i/f that would be
    silly for a device where read times and write times were identical
    (and on a par with FLASH reads). Will a successor i/f be defined
    to address that in much the same way that other i/fs have evolved
    to eschew unnecessary complexity? (what value command queues if
    a command can be executed "instantly"?)

    NVMe has already foreseen the eventual (?) move to other transport
    protocols beyond PCIe. Why hasn't it also anticipated other
    faster-than-FLASH media?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Don Y on Fri Jul 19 15:40:25 2024
    On 2024-07-19, Don Y wrote:
    On 7/19/2024 2:33 AM, Dan Purgert wrote:
    On 2024-07-19, Don Y wrote:
    On 7/18/2024 10:07 PM, Jasen Betts wrote:
    On 2024-07-16, Don Y <blockedofcourse@foo.invalid> wrote:
    It's been more than a decade; how much longer before
    The Next Great Solution renders it obsolescent?

    Or, is it worth virtualizing the i/f -- at the expense
    of performance ("latency"; "throughput" could still be
    maintained) -- for a more future safe approach?

    It's PCI express on a different connector, it should be good for a while.
    *One* transport is "over-PCIe". But, you don't see multiple
    (e.g.) M.2 cards in systems like you would have seen multiple
    SAS/SATA/PATA/SCSI HDDs. So, it seems to be more of a niche
    interface in much the same way that you see only a few PCIe i/fs
    on a motherboard (for specific I/Os).

    Obviously you've never owned a laptop. M.2 is basically the standard
    connector for peripherals that were previously mini-PCIe / mSATA
    interfaces for hard drives, WiFi, cellular modems, etc.

    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition
    to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in systems" is provably false, as all laptop expansion cards (NVMe, WiFi,
    cellular modem / WWAN adapter, etc.) utilize the M.2 interface.


    [...]
    Said another way, why do we still see desktops with SATA/SAS interfaces instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.

    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on
    amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    Thing is though, it's a hard sell to dump SATA ports. Mechanical drives
    are just so cheap compared to SSD.

    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dan Purgert on Fri Jul 19 09:23:11 2024
    On 7/19/2024 8:40 AM, Dan Purgert wrote:
    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition
    to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in systems" is provably false, as all laptop expansion cards (NVMe, WiFi, cellular modem / WWAN adapter, etc.) utilize the M.2 interface.

    They are leveraging a smaller form factor with access to the PCIe bus.
    Much like you'd see floppy disks connected via USB instead of a
    legacy floppy controller. They aren't leveraging NVMe (NonVolatile
    Memory via PCI Express). Or, a *fan* stealing power and mounting from
    a PCI slot (surely there is nothing inherently related to PCI in
    the needs of that fan!)

    "You don't see multiple M.2 cards USED FOR MASS STORAGE DEVICES in
    systems". Happy?

    Said another way, why do we still see desktops with SATA/SAS interfaces
    instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.

    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    So, how am I gonna get 12T in that laptop? And, how many other
    alternative products will I have to choose from?

    Now, imagine having to *build* such a device. How many choices of MCUs/SoCs
    do you have that will support that sort of interface? And, how will you maintain that product for 20+ years?

    Thing is though, it's a hard sell to dump SATA ports. Mechanical drives
    are just so cheap compared to SSD.

    That assumes you don't care about the cost (labor/inconvenience/downtime)
    of maintaining them, the power they consume, cooling requirements, etc.

    As a designer, you have to consider all of those. These are all factors
    that people are looking at in future decisions. (I've canvassed potential customers and they are surprisingly unconcerned with the up-front costs
    of kit. Rather, they want to know what it is going to cost them to
    *run* the kit -- do they have to put someone on staff to maintain/operate
    it, hire a "professional" to service it, site it in a specific location
    or environment, etc.)

    Hence the decline of desktops and the switch to ever smaller devices
    (desktops -> laptops -> phones). You already plan on replacing these
    smaller devices "often" so "maintenance" is "free"! Not true of factory floors, environmental control systems, communications infrastructure...

    [I have a colleague who spends much of his time chasing down old Sun
    (Oracle) boxes -- auctions, swap meets, classifieds, ... Because his
    client has too much invested in Sun-hosted systems and figures it's
    easier to have him scramble to find and restore old kit than to
    redesign the system for a more modern implementation. What do you
    do when you can't find an SCA or FC/AL drive to replace one that
    shits the bed?]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Don Y on Sun Jul 21 14:41:03 2024
    On 2024-07-19, Don Y wrote:
    On 7/19/2024 8:40 AM, Dan Purgert wrote:
    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition
    to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in
    systems" is provably false, as all laptop expansion cards (NVMe, WiFi,
    cellular modem / WWAN adapter, etc.) utilize the M.2 interface.

    They are leveraging a smaller form factor with access to the PCIe bus.

    It's still "M.2".

    [...]
    "You don't see multiple M.2 cards USED FOR MASS STORAGE DEVICES in
    systems". Happy?

    Except you can buy motherboards that support multiple NVMe drives.
    Still a bit on the bleeding edge (and expensive), but the option is
    there.



    Said another way, why do we still see desktops with SATA/SAS interfaces
    instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.

    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on
    amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    So, how am I gonna get 12T in that laptop? And, how many other
    alternative products will I have to choose from?

    Laptop? Dude, you're moving the goalposts so hard you can't even keep
    your own scenarios straight.


    [...]

    Thing is though, it's a hard sell to dump SATA ports. Mechanical drives
    are just so cheap compared to SSD.

    That assumes you don't care about the cost (labor/inconvenience/downtime)
    of maintaining them, the power they consume, cooling requirements, etc.

    Given that Backblaze is still putting out quarterly reports that
    primarily feature mechanical drives, it seems that they're still the
    go-to for bulk storage.


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dan Purgert on Sun Jul 21 11:11:59 2024
    On 7/21/2024 7:41 AM, Dan Purgert wrote:
    On 2024-07-19, Don Y wrote:
    On 7/19/2024 8:40 AM, Dan Purgert wrote:
    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in systems" is provably false, as all laptop expansion cards (NVMe, WiFi,
    cellular modem / WWAN adapter, etc.) utilize the M.2 interface.

    They are leveraging a smaller form factor with access to the PCIe bus.

    It's still "M.2".

    Have you actually *tried* to install an SSD in a slot sized for
    a wifi radio?

    [...]
    "You don't see multiple M.2 cards USED FOR MASS STORAGE DEVICES in
    systems". Happy?

    Except you can buy motherboards that support multiple NVMe drives.
    Still a bit on the bleeding edge (and expensive), but the option is
    there.

    And, what *embedded* MCUs do you have to choose from? Or, are you
    suggesting buying motherboards to get "support for multiple M.2s".

    We're engineers designing products, here -- not IT folks piecing
    together COTS kit, knowing that we can pick from whatever the current
    models are, from year to year (because there's no need for hardware compatibility)

    Said another way, why do we still see desktops with SATA/SAS interfaces instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.

    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on >>> amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    So, how am I gonna get 12T in that laptop? And, how many other
    alternative products will I have to choose from?

    Laptop? Dude, you're moving the goalposts so hard you can't even keep
    your own scenarios straight.

    Desktops aren't even "in the game" so "goalposts" don't apply, there.

    LAPTOPS are the primary devices using small form factor SSDs to save on
    space and power -- issues not important in desktop machines. PHONES
    would be far more appropriate in terms of power, space and thermal characteristics -- but their usage patterns are more "read only"
    than laptops/desktops.

    Existing kit is only useful in determining how THOSE markets/applications
    are addressed. Run a DBMS on a desktop OR laptop and compare the
    results to what a typical consumer experiences. I.e., the existing
    laptop or desktop (or server) is only good as an exemplar from which
    to extrapolate behaviors in other implementations.

    By contrast, I can put 30+TB (HDD) in a desktop without struggling; they
    have ample space, power, and heat dissipating capability that is absent
    in laptops (and completely out of the question in phones)

    For me to *buy* 12T of M.2 SSD, I'm looking at the better part of a kilobuck (in big quantities). And, likely needing to overprovision considerably
    beyond that (10MB/s for 20 years is ~6000 TBW -- and 10MB/s would be abysmally throttled).
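
    Checking that figure (simple arithmetic): a sustained 10MB/s over 20
    years works out to roughly 6PB written, i.e. on the order of 6000 TBW.

        /* Sanity check of the ~6000 TBW figure above: 10MB/s for 20 years. */
        #include <stdio.h>

        int main(void)
        {
            double rate    = 10e6;                     /* 10 MB/s, sustained    */
            double seconds = 20.0 * 365.25 * 86400.0;  /* 20 years, in seconds  */
            double written = rate * seconds;

            printf("~%.0f TB written\n", written / 1e12);   /* ~6300 TB         */
            return 0;
        }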

    OTOH, I can put 15T on a SAN and WATCH how it performs with simulated loads
    to estimate the requirements for other types of storage. I can instrument
    it to see WHERE the reads and writes are going, to evaluate how much "durable" memory I will need vs. how much volatile and long-term ("read only") memory. Why buy an expensive SSD only to discover that you're writing so much to it that it won't last 3 years?

    Thing is though, it's a hard sell to dump SATA ports. Mechanical drives are just so cheap compared to SSD.

    That assumes you don't care about the cost (labor/inconvenience/downtime)
    of maintaining them, the power they consume, cooling requirements, etc.

    Given that Backblaze is still putting out quarterly reports that
    primarily feature mechanical drives, it seems that they're still the
    go-to for bulk storage.

    Because most consumer/portable devices don't need much physical storage.
    People are content to rely on their phone for most "computing" needs.
    Or, a laptop for "real" computer uses (with a slow rust disk or small SSD). Aside from email and web surfing, I don't think I can find a neighbor
    who uses a "computer" for anything, anymore! (except gaming)

    The machines that I see being recycled (daily!) are mere shadows of earlier machines. Typically a single spindle (~500G) and 2 short PCI(e) slots that
    are invariably empty (video on motherboard -- two channels! -- along with
    NIC, USB, obsolescent serial) and a 100-200W power supply.

    Data centers aren't overly concerned with space/power/heat (given that
    doubling the capacity of spinning rust doesn't double its power consumption
    so, if you can already power/cool the existing store, you can power/cool
    one that is twice as large).

    Consumer devices don't beat on their storage devices so they want physical
    size and low power from the SSD, not capacity (or performance or durability!). And, will plan on replacing it -- by choice or because something breaks that isn't economically repairable -- in a few years (long before the SSD's/HDD's warranty)

    Data centers want capacity and durability so are willing to trade space
    and power for it. And, have paid staff on hand to ensure high availability; not true of consumer kit. AND, budgets that factor periodic upgrades
    and replacements in to their operating costs. (How many consumers have "budgets" for their computers? And, scheduled hardware update intervals?)

    Imagine a consumer trying to migrate 12T of data onto a new "storage medium" BEFORE the original medium shits the bed... (i.e., it's *my* job to make
    sure he can do that -- by keeping a second copy of his data to safeguard against that inevitability -- essentially billing him for the replacement before he actually needs it)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Don Y on Sun Jul 21 21:09:35 2024
    On 2024-07-21, Don Y wrote:
    On 7/21/2024 7:41 AM, Dan Purgert wrote:
    On 2024-07-19, Don Y wrote:
    On 7/19/2024 8:40 AM, Dan Purgert wrote:
    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in systems" is provably false, as all laptop expansion cards (NVMe, WiFi, cellular modem / WWAN adapter, etc.) utilize the M.2 interface.

    They are leveraging a smaller form factor with access to the PCIe bus.

    It's still "M.2".

    Have you actually *tried* to install an SSD in a slot sized for
    a wifi radio?

    You can't. They've got different keys.

    Same as how DIMMs have different keying; or does that confuse you too?




    [...]
    "You don't see multiple M.2 cards USED FOR MASS STORAGE DEVICES in
    systems". Happy?

    Except you can buy motherboards that support multiple NVMe drives.
    Still a bit on the bleeding edge (and expensive), but the option is
    there.

    And, what *embedded* MCUs do you have to choose from? Or, are you
    suggesting buying motherboards to get "support for multiple M.2s".

    Embedded? A PC motherboard is hardly an "embedded" device.

    Said another way, why do we still see desktops with SATA/SAS interfaces instead of a bunch of M.2 connectors? Obviously, one could replace
    SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.
    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    So, how am I gonna get 12T in that laptop? And, how many other
    alternative products will I have to choose from?

    Laptop? Dude, you're moving the goalposts so hard you can't even keep
    your own scenarios straight.

    Desktops aren't even "in the game" so "goalposts" don't apply, there.

    You said:
    Said another way, why do we still see desktops with SATA/SAS interfaces

    Hey, look at that, you asked about desktops.


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dan Purgert on Sun Jul 21 15:52:25 2024
    On 7/21/2024 2:09 PM, Dan Purgert wrote:
    Laptops don't have lots of devices -- of any kind. By contrast,
    a desktop/server can easily have dozens of storage devices in addition to add-in cards for HBAs, NICs, display adapters, etc.

    My point was that your assertion of "you don't see multiple M.2 cards in systems" is provably false, as all laptop expansion cards (NVMe, WiFi, cellular modem / WWAN adapter, etc.) utilize the M.2 interface.

    They are leveraging a smaller form factor with access to the PCIe bus.

    It's still "M.2".

    Have you actually *tried* to install an SSD in a slot sized for
    a wifi radio?

    You can't. They've got different keys.

    So, you can't install 5 M.2 SSD devices. If it's not an SSD, what value
    to this conversation?

    Same as how DIMMs have different keying; or does that confuse you too?

    [...]
    "You don't see multiple M.2 cards USED FOR MASS STORAGE DEVICES in
    systems". Happy?

    Except you can buy motherboards that support multiple NVMe drives.
    Still a bit on the bleeding edge (and expensive), but the option is
    there.

    And, what *embedded* MCUs do you have to choose from? Or, are you
    suggesting buying motherboards to get "support for multiple M.2s".

    Embedded? A PC motherboard is hardly an "embedded" device.

    Exactly. I'm not an IT person integrating COTS devices but, rather,
    an engineer designing devices with available components. To use
    the PCIe interface to NVMe, I'd need to constrain my choice of
    processors to those that support PCIe -- JUST for the nonvolatile
    memory requirement!

    PCIe is a time-limited technology. Thus, tying NVMe to PCIe suggests
    avoiding that coupling.

    But, NVMe is a PROTOCOL and if you remove the PCIe requirement,
    can be more "future safe". THAT was the point of the original
    question (I'm using iSCSI in my prototype and that is obsolescent
    in the time scales mentioned; how far behind iSCSI will NVMe be,
    given that one can already imagine applications that it fails
    to address?)

    Said another way, why do we still see desktops with SATA/SAS interfaces instead of a bunch of M.2 connectors? Obviously, one could replace SATA SSDs with M.2 SSDs -- yet manufacturers keep offering SATA i/fs.
    Modern "high end" ASUS motherboard (i.e. shipping with PCIe 5.0, etc) on amazon shows five (5) M.2 slots, all capable of supporting NVMe SSD
    *AND* 4 SATA ports.

    So, how am I gonna get 12T in that laptop? And, how many other
    alternative products will I have to choose from?

    Laptop? Dude, you're moving the goalposts so hard you can't even keep
    your own scenarios straight.

    Desktops aren't even "in the game" so "goalposts" don't apply, there.

    You said:
    Said another way, why do we still see desktops with SATA/SAS interfaces

    Hey, look at that, you asked about desktops.

    .. as a large market user of HIGH CAPACITY and high durability storage
    devices. Should I have asked about microwave ovens, instead? (they store cooking patterns) Or, personal media players? How many devices can
    you think of that would conceivably have multiple terabytes of
    connected R/W storage? If we're only talking about kilobytes or
    megabytes, why bother with any i/f other than the raw NAND/NOR one?

    There's nothing to stop a *disk* manufacturer from adopting NVMe as an interface -- we've had ST506, PATA, SATA, SASI/SCSI, SAS, FC-AL, SCA, USB,
    FW, etc. all in front of the same spinning rust platters. Why no market
    for NVMe? Imagine how much easier it would be for motherboard/laptop
    manufacturers if they could standardize on ONE i/f for "storage media"
    instead of having to accommodate two (or more -- think: memory CARDS).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)