• Bargain LTSpice/Lab laptop

    From bitrex@21:1/5 to All on Mon Jan 24 01:26:04 2022
    <https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071>

    Last of the Japanese/German-made business-class machines; several years
    old now but AFAIK they're well-built and not a PITA to work on, and even
    a refurb should be good for a few more years of service...

  • From Rick C@21:1/5 to bitrex on Mon Jan 24 12:28:23 2022
    On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
    <https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071>

    Last of the Japanese/German-made business-class machines; several years
    old now but AFAIK they're well-built and not a PITA to work on, and even
    a refurb should be good for a few more years of service...

    I bought a new 17" laptop with just 12 GB of RAM as a second computer when my first died and I needed something right away to copy data off the old hard drive. It was very light and nice to use in a portable setting. But the combination of 12 GB of RAM
    and the rotating hard drive was just too slow. I ended up getting another 17" machine with a 1 TB flash drive and 16 GB of RAM, expecting to have to upgrade to 32 GB... but it runs very well, even when the 16 GB is maxed out. That has to be due to
    the flash drive being so much faster than a rotating drive. I've never bothered to upgrade it. Maybe if I were running simulations a lot that would show up... or I could just close a browser or two. They are the real memory hogs these days.

    The new machine is not as light as the other one, but still much lighter than my Dell Precision gut buster. I ended up returning the 12 GB machine when I found the RAM was not upgradable.

    --

    Rick C.

    - Get 1,000 miles of free Supercharging
    - Tesla referral code - https://ts.la/richard11209

  • From bitrex@21:1/5 to Rick C on Mon Jan 24 15:39:31 2022
    On 1/24/2022 3:28 PM, Rick C wrote:
    On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
    <https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071>

    Last of the Japanese/German-made business-class machines; several years
    old now but AFAIK they're well-built and not a PITA to work on, and even
    a refurb should be good for a few more years of service...

    I bought a 17" new laptop with just 12 GB of RAM as a second computer when my first died and I needed something right away to copy data to off the old hard drive. It was very light and nice to use in a portable setting. But the combination of 12 GB
    RAM and the rotating hard drive was just too slow. I ended up getting another 17" inch machine with 1 TB flash drive and 16 GB of RAM expecting to have to upgrade to 32 GB... but it runs very well, even when the 16 GB is maxed out. That has to be due
    to the flash drive being so much faster than a rotating drive. I've never bothered to upgrade it. Maybe if I were running simulations a lot that would show up... or I could just close a browser or two. They are the real memory hogs these day.

    The new machine is not as light as the other one, but still much lighter than my Dell Precision gut buster. I ended up returning the 12 GB machine when I found the RAM was not upgradable.


    I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
    which seems adequate for just about anything I throw at it.

    I'd be surprised if that Fujitsu can't be upgraded to at least 16.

    Another nice deal for mass storage/backups of work files is these
    surplus Dell H700 hardware RAID controllers. If you have a spare x4 or
    wider PCIe slot you get 8 channels of RAID 0/1 per card; they probably
    used to be in servers, but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS
    software RAID or the motherboard's software RAID.

    Yes, a RAID array isn't a backup, but I don't see any reason not to have
    your on-site backup in RAID 1.

    <https://www.amazon.com/Dell-Controller-Standard-Profile-J9MR2/dp/B01J4744L0/>

  • From bitrex@21:1/5 to bitrex on Mon Jan 24 15:40:34 2022
    On 1/24/2022 3:39 PM, bitrex wrote:
    On 1/24/2022 3:28 PM, Rick C wrote:
    On Monday, January 24, 2022 at 1:26:11 AM UTC-5, bitrex wrote:
    <https://www.walmart.com/ip/Fujitsu-Lifebook-U745-Ultrabook-Refurbished-14-Intel-Core-i7-5600U-2-6-GHz-512-SSD-12-GB-Ram/116957071>


    Last of the Japanese/German-made business-class machines; several years
    old now but AFAIK they're well-built and not a PITA to work on, and even
    a refurb should be good for a few more years of service...

    I bought a 17" new laptop with just 12 GB of RAM as a second computer
    when my first died and I needed something right away to copy data to
    off the old hard drive.  It was very light and nice to use in a
    portable setting.  But the combination of 12 GB RAM and the rotating
    hard drive was just too slow.  I ended up getting another 17" inch
    machine with 1 TB flash drive and 16 GB of RAM expecting to have to
    upgrade to 32 GB... but it runs very well, even when the 16 GB is
    maxed out.  That has to be due to the flash drive being so much faster
    than a rotating drive.  I've never bothered to upgrade it.  Maybe if I
    were running simulations a lot that would show up... or I could just
    close a browser or two.  They are the real memory hogs these day.

    The new machine is not as light as the other one, but still much
    lighter than my Dell Precision gut buster.  I ended up returning the
    12 GB machine when I found the RAM was not upgradable.


    I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
    which seems adequate for just about anything I throw at it.

    I'd be surprised if that Fujitsu can't be upgraded to at least 16.

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup

    It isn't a stand-alone backup "policy", rather.

  • From DecadentLinuxUserNumeroUno@decadenc@21:1/5 to Rick C on Mon Jan 24 22:11:16 2022
    Rick C <gnuarm.deletethisbit@gmail.com> wrote in news:685c2c10-c084-4d06-8f00-bf47fae4ee30n@googlegroups.com:

    That has to be due to the flash drive being so much faster than a
    rotating drive.

    Could easily be processor related as well. Make a bigger user-defined
    swap space on it. It would probably run faster under Ubuntu
    (or any Linux) as well.
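
    For reference, a minimal sketch of what adding a swap file looks like on a Linux
    box (run as root; the /swapfile path and the 8 GiB size are arbitrary example
    choices, not anything from this thread):

        import os
        import subprocess

        SWAPFILE = "/swapfile"       # example path
        SIZE_GIB = 8                 # example size

        # Write real zeroed blocks; a sparse file is not safe to use as swap.
        chunk = b"\0" * (4 * 1024 * 1024)
        with open(SWAPFILE, "wb") as f:
            for _ in range(SIZE_GIB * 1024 // 4):
                f.write(chunk)
        os.chmod(SWAPFILE, 0o600)

        subprocess.run(["mkswap", SWAPFILE], check=True)   # format it as swap
        subprocess.run(["swapon", SWAPFILE], check=True)   # enable it now
        # Add "/swapfile none swap sw 0 0" to /etc/fstab to keep it across reboots.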

    I have a now three year old 17" Lenovo P71 as my main PC.

    It has an SSD as well as a spinning drive in it, but is powered by a
    graphics-workstation-class Xeon and Quadro graphics pushing a 4K
    display, and it could push several more via the Thunderbolt I/O ports.
    And it has only 16 GB of RAM. It will likely be the last full PC machine I
    own. At $3500 for a $5000 machine it ought to last for years. No disappointments for me.

    It is my 3D CAD workstation and has Windows 10 Pro for Workstations on it.
    I keep it fully upgraded and have never had a problem, and it benchmarks
    pretty dag nab fast too. And I also have the docking station for it,
    which was another $250. I could never be more pleased. The only
    drawback is that it weighs a ton and it is nearly impossible to find a
    backpack that will fit it. I know now why college kids stay below the 17"
    form factor.

  • From Rick C@21:1/5 to DecadentLinux...@decadence.org on Mon Jan 24 14:51:42 2022
    On Monday, January 24, 2022 at 5:11:25 PM UTC-5, DecadentLinux...@decadence.org wrote:
    Rick C <gnuarm.del...@gmail.com> wrote in news:685c2c10-c084-4d06...@googlegroups.com:
    That has to be due to the flash drive being so much faster than a
    rotating drive.
    Could easily be processor related as well. Make a bigger user
    defined swap space on it. It would probably run faster under Ubuntu
    (any Linux) as well.

    I have a now three year old 17" Lenovo P71 as my main PC.

    It has an SSD as well as a spinning drive in it but is powered by a
    graphics workstation class Xeon and Quadro graphics pushing a 4k
    display and would push several more via the thrunderbolt I/O ports.
    And it is only 16GB RAM. It will likely be the last full PC machine I
    own. For $3500 for a $5000 machine it ought to last for years. No disappointments for me.

    It is my 3D CAD workstation and has Windows 10 Pro Workstation on it.
    I keep it fooly upgraded and have never had a problem and it benchmarks pretty dag nab fast too. And I also have the docking station for it
    which was another $250. Could never be more pleased. The only
    drawback is that it weighs a ton and it is nearly impossible to find a backpack that will fit it. I know now why college kids stay below 17"
    form factor machines.

    I've had only 17" machines since day one of my laptops and have always found an adequate bag for them. I had a couple of fabric bags which held the machines well, but when I got the Dell monster it was a tight squeeze. Then a guy was selling leather at
    Costco. I bought a wallet and a brief bag (not hard-sided, so I can't call it a case). It's not quite a computer bag as it has no padding, not even for the corners. Again, the Dell fit, but tightly. Now that I have this thing (a Lenovo, which I swore
    I would never buy again, but here I am) I can even fit the lesser 17-inch laptop in the bag at the same time! It doesn't have as many nooks and crannies, but everything fits, and the bag drops into the sizer at the airport as a "personal" bag.

    I've always been anxious about bags on airplanes. I've seen too many cases of the ticket guys being jerks and making people pay for extra baggage, or even requiring them to check bags that don't fit the outline. I was boarding my most recent flight and
    the guy didn't like my plastic grocery-store bag and asked what was in it. I told him it was food for the flight and clothing. I was going from 32 °F to 82 °F and had a bulky sweater and warm gloves I had already taken off before the flight. The guy
    told me to put the clothes in the computer bag, as if they would fit!!! I pushed back, explaining this was what I had to wear to get to the airport without getting hypothermia. He had to mull that over and let me board the plane. WTF??!!!

    I saw another guy doing the same thing with a family whose children had plastic bags with souvenir stuffed animals or something. Spirit wants $65 each for carry-ons at the gate. He didn't even recommend that they stuff it all into a single bag. Total
    jerk! No wonder I don't like airlines.

    --

    Rick C.

    + Get 1,000 miles of free Supercharging
    + Tesla referral code - https://ts.la/richard11209

  • From David Brown@21:1/5 to bitrex on Mon Jan 24 23:48:02 2022
    On 24/01/2022 21:39, bitrex wrote:

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.


    You use RAID for three purposes, which may be combined - to get higher
    speeds (for your particular usage), to get more space (compared to a
    single drive), or to get reliability and better up-time in the face of
    drive failures.

    Yes, you should use RAID on your backups - whether it be a server with
    disk space for copies of data, or "manual RAID1" by making multiple
    backups to separate USB flash drives. But don't imagine RAID is
    connected with "backup" in any way.


    From my experience with RAID, I strongly recommend you dump these kinds
    of hardware RAID controllers. Unless you are going for serious
    top-shelf equipment with battery backup, guaranteed response time by
    recovery engineers with spare parts and that kind of thing, use Linux
    software RAID. It is far more flexible, faster, more reliable and -
    most importantly - much easier to recover in the case of hardware failure.

    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure. The important points are how you spot the problem (does your
    system send you an email, or does it just turn on an LED and quietly beep
    to itself behind closed doors?), and how you can recover. Your fancy
    hardware RAID controller card is useless when you find you can't get a
    replacement disk that is on the manufacturer's "approved" list from a
    decade ago. (With Linux, you can use /anything/ - real, virtual, local,
    remote, flash, disk, whatever.) And what do you do when the RAID card
    dies (yes, that happens)? For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale. (With Linux, plug the drives into a new
    system.)
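
    To make the "does it send you an email?" point concrete, here's a minimal
    sketch of that kind of check, assuming Linux md and a mail relay on localhost
    (the addresses are placeholders; mdadm's own --monitor mode can also do this
    for you):

        import re
        import smtplib
        from email.message import EmailMessage

        def degraded_arrays(path="/proc/mdstat"):
            """Return md devices whose status shows a missing member, e.g. [U_]."""
            text = open(path).read()
            return [dev for dev, status in
                    re.findall(r"^(md\d+) :.*?\[([U_]+)\]", text, re.M | re.S)
                    if "_" in status]

        bad = degraded_arrays()
        if bad:
            msg = EmailMessage()
            msg["Subject"] = "RAID degraded: " + ", ".join(bad)
            msg["From"] = "raid-monitor@localhost"    # placeholder
            msg["To"] = "you@example.com"             # placeholder
            msg.set_content(open("/proc/mdstat").read())
            with smtplib.SMTP("localhost") as s:      # assumes a local MTA
                s.send_message(msg)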

    I have only twice lost data from RAID systems (and had to restore them
    from backup). Both times it was hardware RAID - good quality Dell and
    IBM stuff. Those are, oddly, the only two hardware RAID systems I have
    used. A 100% failure rate.

    (BSD and probably most other *nix systems have perfectly good software
    RAID too, if you don't like Linux.)

  • From bitrex@21:1/5 to David Brown on Mon Jan 24 20:14:47 2022
    On 1/24/2022 5:48 PM, David Brown wrote:


    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure. The important points are how you spot the problem (does your
    system send you an email, or does it just put on an LED and quietly beep
    to itself behind closed doors?), and how you can recover. Your fancy hardware RAID controller card is useless when you find you can't get a replacement disk that is on the manufacturer's "approved" list from a
    decade ago. (With Linux, you can use /anything/ - real, virtual, local, remote, flash, disk, whatever.) And what do you do when the RAID card
    dies (yes, that happens) ? For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale. (With Linux, plug the drives into a new
    system.)

    It's not the last word in backup, but why should I have to do any of that?
    I'd just go get a new, modern controller and drives and restore from my
    off-site backup...

  • From bitrex@21:1/5 to David Brown on Mon Jan 24 20:03:45 2022
    On 1/24/2022 5:48 PM, David Brown wrote:
    On 24/01/2022 21:39, bitrex wrote:

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS
    software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.


    You use RAID for three purposes, which may be combined - to get higher
    speeds (for your particular usage), to get more space (compared to a
    single drive), or to get reliability and better up-time in the face of
    drive failures.

    Yes, you should use RAID on your backups - whether it be a server with
    disk space for copies of data, or "manual RAID1" by making multiple
    backups to separate USB flash drives. But don't imagine RAID is
    connected with "backup" in any way.


    From my experience with RAID, I strongly recommend you dump these kind
    of hardware RAID controllers. Unless you are going for serious
    top-shelf equipment with battery backup, guaranteed response time by
    recovery engineers with spare parts and that kind of thing, use Linux software raid. It is far more flexible, faster, more reliable and -
    most importantly - much easier to recover in the case of hardware failure.

    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure. The important points are how you spot the problem (does your
    system send you an email, or does it just put on an LED and quietly beep
    to itself behind closed doors?), and how you can recover. Your fancy hardware RAID controller card is useless when you find you can't get a replacement disk that is on the manufacturer's "approved" list from a
    decade ago. (With Linux, you can use /anything/ - real, virtual, local, remote, flash, disk, whatever.) And what do you do when the RAID card
    dies (yes, that happens) ? For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale. (With Linux, plug the drives into a new
    system.)

    I have only twice lost data from RAID systems (and had to restore them
    from backup). Both times it was hardware RAID - good quality Dell and
    IBM stuff. Those are, oddly, the only two hardware RAID systems I have
    used. A 100% failure rate.

    (BSD and probably most other *nix systems have perfectly good software
    RAID too, if you don't like Linux.)

    I'm considering a hybrid scheme where the system partition is put on the
    HW controller in RAID 1, non-critical files I want fast access to, like
    audio/video, are on HW RAID 0, and the more critical long-term on-site
    mass storage that's not accessed much is in some kind of software
    redundant-RAID equivalent, with changes synced to a cloud backup service.

    That way you can boot from something other than the dodgy motherboard
    software RAID, but you're not dead in the water if the OS drive fails,
    and you can probably use the remaining drive to create a same-day image of
    the system partition to restore from.

    Worst case, you restore the system drive from your last image, or from
    scratch if you have to. Restoring the system drive from scratch isn't a
    crisis, but it is seriously annoying, and most people don't make system
    drive images every day.

  • From Don Y@21:1/5 to bitrex on Mon Jan 24 21:54:29 2022
    On 1/24/2022 1:39 PM, bitrex wrote:
    I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year, which seems adequate for just about anything I throw at it.

    Depends, of course, on "what you throw at it". Most of my workstations
    have 144G of RAM, 5T of rust. My smallest (for writing software) has
    just 48G. The CAD, EDA and document prep workstations can easily eat
    gobs of RAM to avoid paging to disk. Some of my SfM "exercises" will
    eat every byte that's available!

    I'd be surprised if that Fujitsu can't be upgraded to at least 16.

    Another nice deal for mass storage/backups of work files are these surplus Dell
    H700 hardware RAID controllers, if you have a spare 4x or wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be in servers probably but they work fine OOTB with Windows 10/11 and the modern Linux distros I've tried,
    and you don't have to muck with the OS software RAID or the motherboard's software RAID.

    RAID is an unnecessary complication. I've watched all of my peers dump
    their RAID configurations in favor of simple "copies" (RAID1 without
    the controller). Try upgrading a drive (to a larger size). Or,
    moving a drive to another machine (I have 6 identical workstations
    and can just pull the "sleds" out of one to move them to another
    machine if the first machine dies -- barring license issues).

    If you experience failures, then you assign value to the mechanisms
    that protect against those failures. OTOH, if you *don't*, then
    the costs associated with those mechanisms become the dominant
    factor in your usage decisions. I.e., if they make other "normal"
    activities (disk upgrades) more tedious, then that counts against
    them, nullifying their intended value.

    E.g., most folks experience PEBKAC failures, which RAID won't prevent.
    Yet they're still lazy about backups (which could alleviate those failures).

    Yes a RAID array isn't a backup but I don't see any reason not to have your on-site backup in RAID 1.

    I use surplus "shelves" as JBOD with a SAS controller. This allows me to
    also pull a drive from a shelf and install it directly in another machine
    without having to muck with taking apart an array, etc.

    Think about it, do you ever have to deal with a (perceived) "failure"
    when you have lots of *spare* time on your hands? More likely, you
    are in the middle of something and not keen on being distracted by
    a "maintenance" issue.

    [In the early days of the PC, I found having duplicate systems to be
    a great way to verify a problem was software related vs. a "machine
    problem": pull drive, install in identical machine and see if the
    same behavior manifests. Also good when you lose a power supply
    or some other critical bit of hardware and can work around it just by
    moving media (I keep 3 spare power supplies for my workstations
    as a prophylactic measure) :> ]

  • From Don Y@21:1/5 to bitrex on Mon Jan 24 21:39:00 2022
    On 1/24/2022 6:14 PM, bitrex wrote:
    It's not the last word in backup, why should I have to do any of that I just go
    get new modern controller and drives and restore from my off-site backup...

    Exactly. If your drives are "suspect", then why are you still using them?
    RAID is a complication that few folks really *need*.

    If you are using it, then you should feel 100.0% confident in taking
    a drive out of the array, deliberately scribbling on random sectors
    and then reinstalling in the array to watch it recover. A good exercise
    to remind you what the process will be like when/if it happens for real.
    (Just like doing an unnecessary "restore" from a backup).
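
    In that same spirit, a cheap rehearsal short of a full restore is to
    checksum-compare the backup against the original instead of just trusting it.
    A sketch, with placeholder paths:

        import hashlib
        from pathlib import Path

        ORIG = Path.home() / "projects"          # placeholder source tree
        BACKUP = Path("/mnt/backup/projects")    # placeholder backup copy

        def sha256(p, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(p, "rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        bad = 0
        for f in ORIG.rglob("*"):
            if f.is_file():
                g = BACKUP / f.relative_to(ORIG)
                if not g.is_file() or sha256(f) != sha256(g):
                    bad += 1
                    print("missing or differs:", f)
        print(bad, "problem files")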

    RAID (5+) is especially tedious (and wasteful) with large arrays.
    Each of my workstations has 5T spinning. Should I add another ~8T just
    to be sure that first 5T remains intact? Or, should I have another
    way of handling the (low probability) event of having to restore some "corrupted" (or, accidentally deleted?) portion of the filesystem?

    Image your system disk (and any media that host applications).
    Then, backup your working files semi-regularly.
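
    A minimal sketch of the working-files half of that advice, assuming the
    destination is just a mounted NAS or USB path (all the names below are
    placeholders):

        import tarfile
        import time
        from pathlib import Path

        SRC = Path.home() / "projects"       # placeholder working directory
        DEST = Path("/mnt/nas/backups")      # placeholder mounted backup location

        DEST.mkdir(parents=True, exist_ok=True)
        archive = DEST / f"projects-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"

        with tarfile.open(archive, "w:gz") as tar:
            tar.add(SRC, arcname=SRC.name)   # the whole tree, compressed

        print(f"wrote {archive} ({archive.stat().st_size // 2**20} MiB)")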

    I've "lost" two drives in ~40 years: one in a laptop that I
    had configured as a 24/7/365 appliance (I'm guessing the drive
    didn't like spinning up and down constantly; I may have been
    able to prolong its life by NOT letting it spin down) and
    another drive that developed problems in the boot record
    (and was too small -- 160GB -- to bother trying to salvage).

    [Note that I have ~200 drives deployed, here]

  • From bitrex@21:1/5 to Don Y on Tue Jan 25 00:30:41 2022
    On 1/24/2022 11:39 PM, Don Y wrote:
    On 1/24/2022 6:14 PM, bitrex wrote:
    It's not the last word in backup, why should I have to do any of that
    I just go get new modern controller and drives and restore from my
    off-site backup...

    Exactly.  If your drives are "suspect", then why are you still using them? RAID is a complication that few folks really *need*.

    If you are using it, then you should feel 100.0% confident in taking
    a drive out of the array, deliberately scribbling on random sectors
    and then reinstalling in the array to watch it recover.  A good exercise
    to remind you what the process will be like when/if it happens for real. (Just like doing an unnecessary "restore" from a backup).

    RAID (5+) is especially tedious (and wasteful) with large arrays.
    Each of my workstations has 5T spinning.  Should I add another ~8T just
    to be sure that first 5T remains intact?  Or, should I have another
    way of handling the (low probability) event of having to restore some "corrupted" (or, accidentally deleted?) portion of the filesystem?

    Image your system disk (and any media that host applications).
    Then, backup your working files semi-regularly.

    I've "lost" two drives in ~40 years:  one in a laptop that I
    had configured as a 24/7/365 appliance (I'm guessing the drive
    didn't like spinning up and down constantly; I may have been
    able to prolong its life by NOT letting it spin down) and
    another drive that developed problems in the boot record
    (and was too small -- 160GB -- to bother trying to salvage).

    [Note that I have ~200 drives deployed, here]

    The advantage I see in RAID-ing the system drive and projects drive is
    mainly avoidance of downtime; the machine stays usable while you prepare
    the restore solution.

    In an enterprise situation you have other machines and an enterprise-class
    network and Internet connection to aid in this process. I have a small
    home office with one "business class" desktop PC and a consumer Internet
    connection; if there are a lot of files to restore, the off-site backup
    place may have to mail you a disk.

    Ideally I don't have to do that either; I just go to the local NAS
    nightly backup, but maybe lose some of the day's work if I only have one
    projects drive and it's failed. Not the worst thing, but with a hot image
    you don't have to lose anything unless you're very unlucky and the
    second drive fails while you do an emergency sync.

    But particularly if the OS drive goes down it's very helpful to still
    have a usable desktop that can assist in its own recovery.

  • From Don Y@21:1/5 to bitrex on Mon Jan 24 23:29:10 2022
    On 1/24/2022 10:30 PM, bitrex wrote:

    Image your system disk (and any media that host applications).
    Then, backup your working files semi-regularly.

    I've "lost" two drives in ~40 years: one in a laptop that I
    had configured as a 24/7/365 appliance (I'm guessing the drive
    didn't like spinning up and down constantly; I may have been
    able to prolong its life by NOT letting it spin down) and
    another drive that developed problems in the boot record
    (and was too small -- 160GB -- to bother trying to salvage).

    [Note that I have ~200 drives deployed, here]

    The advantage I see in RAID-ing the system drive and projects drive is avoidance of downtime mainly; the machine stays usable while you prepare the restore solution.

    But, in practice, how often HAS that happened? And, *why*? I.e.,
    were you using old/shitty drives (and should have "known better")?
    How "anxious" will you be knowing that you are operating on a
    now faulted machine?

    Enterprise situation you have other machines and a enterprise-class network and
    Internet connection to aid in this process, I have a small home office with one
    "business class" desktop PC and a consumer Internet connection, if there are a
    lot of files to restore the off-site backup place may have to mail you a disk.

    Get a NAS/SAN. Or, "build" one using an old PC (that you have "outgrown"). Note that all you need the NAS/SAN/homegrown-solution to do is be "faster"
    than your off-site solution.

    Keep a laptop in the closet for times when you need to access the outside
    world while your primary machine is dead (e.g., to research the problem, download drivers, etc.)

    I have a little headless box that runs my DNS/TFTP/NTP/font/etc. services.
    It's a pokey little Atom @ 1.6GHz/4GB with a 500G laptop drive.
    Plenty fast enough for the "services" that it regularly provides.

    But, it's also "available" 24/7/365 (because the services that it provides
    are essential to EVERY machine in the house, regardless of the time of day I might choose to use them) so I can always push a tarball onto it to take a snapshot of whatever I'm working on at the time. (Hence the reason for such
    a large drive on what is actually just an appliance).

    Firing up a NAS/SAN is an extra step that I would tend to avoid -- because
    it's not normally up and running. By contrast, the little Atom box always
    *is* (so, let it serve double-duty as a small NAS).

    Ideally I don't have to do that either I just go to the local NAS nightly-backup but maybe lose some of the day's work if I only have one projects drive and it's failed. Not the worst thing but with a hot image you don't have to lose anything unless you're very unlucky and the second drive fails while you do an emergency sync.

    I see data as falling in several categories, each with different recovery costs:
    - the OS
    - applications
    - support "libraries"/collections
    - working files

    The OS is the biggest PITA to install/restore, as its installation often
    means other things that depend on it must subsequently be (re)installed.

    Applications represent the biggest time sink because each has licenses
    and configurations that need to be recreated.

    Libraries/collections tend to just be *large* but really just
    bandwidth limited -- they can be restored at any time to any
    machine without tweaks.

    Working files change the most frequently but, as a result, tend
    to be the freshest in your mind (what did you work on, today?).
    Contrast that with "how do you configure application X to work
    the way you want it to work and co-operate with application Y?"

    One tends not to do much "original work" in a given day -- the
    number of bytes that YOU directly change is small so you can
    preserve your day's efforts relatively easily (a lot MAY
    change on your machine but most of those bytes were changed
    by some *program* that responded to your small changes!).

    Backing up libraries/collections is just a waste of disk space;
    reinstalling from the originals (archive) takes just as long!

    Applications can be restored from an image created just after
    you installed the most recent application/update (this also
    gives you a clean copy of the OS).

    Restoring JUST the OS is useful if you are repurposing a
    machine and, thus, want to install a different set of
    applications. If you're at this point, you've likely got
    lots of manual work ahead of you as you select and install
    each of those apps -- before you can actually put them to use!

    I am religious about keeping *only* applications and OS on the
    "system disk". So, at any time, I can reinstall the image and
    know that I've not "lost anything" (of substance) in the process.
    Likewise, not letting applications creep onto non-system disks.

    This last bit is subtly important because you want to be
    able to *remove* a "non-system" disk and not impact the
    operation of that machine.

    [I've designed some "fonts" [sic] for use in my documents.
    Originally, I kept the fonts -- and the associated working
    files used to create them -- in a folder alongside those
    documents. On a non-system/working disk. Moving those
    documents/fonts is then "complicated" (not unduly so) because
    the system wants to keep a handle to the fonts hosted on it!]

    But particularly if the OS drive goes down it's very helpful to still have a usable desktop that can assist in its own recovery.

    Hence the laptop(s). Buy a SATA USB dock. It makes it a lot easier
    to use (and access) "bare" drives -- from *any* machine!

  • From David Brown@21:1/5 to bitrex on Tue Jan 25 17:18:01 2022
    On 25/01/2022 02:03, bitrex wrote:
    On 1/24/2022 5:48 PM, David Brown wrote:
    On 24/01/2022 21:39, bitrex wrote:

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS
    software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.


    You use RAID for three purposes, which may be combined - to get higher
    speeds (for your particular usage), to get more space (compared to a
    single drive), or to get reliability and better up-time in the face of
    drive failures.

    Yes, you should use RAID on your backups - whether it be a server with
    disk space for copies of data, or "manual RAID1" by making multiple
    backups to separate USB flash drives.  But don't imagine RAID is
    connected with "backup" in any way.


     From my experience with RAID, I strongly recommend you dump these kind
    of hardware RAID controllers.  Unless you are going for serious
    top-shelf equipment with battery backup, guaranteed response time by
    recovery engineers with spare parts and that kind of thing, use Linux
    software raid.  It is far more flexible, faster, more reliable and -
    most importantly - much easier to recover in the case of hardware
    failure.

    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure.  The important points are how you spot the problem (does your
    system send you an email, or does it just put on an LED and quietly beep
    to itself behind closed doors?), and how you can recover.  Your fancy
    hardware RAID controller card is useless when you find you can't get a
    replacement disk that is on the manufacturer's "approved" list from a
    decade ago.  (With Linux, you can use /anything/ - real, virtual, local,
    remote, flash, disk, whatever.)  And what do you do when the RAID card
    dies (yes, that happens) ?  For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale.  (With Linux, plug the drives into a new
    system.)

    I have only twice lost data from RAID systems (and had to restore them
    from backup).  Both times it was hardware RAID - good quality Dell and
    IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
    used.  A 100% failure rate.

    (BSD and probably most other *nix systems have perfectly good software
    RAID too, if you don't like Linux.)

    I'm considering a hybrid scheme where the system partition is put on the
    HW controller in RAID 1, non-critical files but want fast access to like audio/video are on HW RAID 0, and the more critical long-term on-site
    mass storage that's not accessed too much is in some kind of software redundant-RAID equivalent, with changes synced to cloud backup service.

    That way you can boot from something other than the dodgy motherboard software-RAID but you're not dead in the water if the OS drive fails,
    and can probably use the remaining drive to create a today-image of the system partition to restore from.

    Worst-case you restore the system drive from your last image or from
    scratch if you have to, restoring the system drive from scratch isn't a crisis but it is seriously annoying, and most people don't do system
    drive images every day

    I'm sorry, but that sounds a lot like you are over-complicating things
    because you have read somewhere that "hardware raid is good", "raid 0 is
    fast", and "software raid is unreliable" - but you don't actually
    understand any of it. (I'm not trying to be insulting at all - everyone
    has limited knowledge that is helped by learning more.) Let me try to
    clear up a few misunderstandings, and give some suggestions.

    First, I recommend you drop the hardware controllers. Unless you are
    going for a serious high-end device with battery backup and the rest,
    and are happy to keep a spare card on-site, it will be less reliable,
    slower, less flexible and harder for recovery than Linux software RAID -
    by significant margins.

    (I've been assuming you are using Linux, or another *nix. If you are
    using Windows, then you can't do software raid properly and have far
    fewer options.)

    Secondly, audio and visual files do not need anything fast unless you
    are talking about ridiculous high quality video, or serving many clients
    at once. 4K video wants about 25 Mbps bandwidth - a spinning rust hard
    disk will usually give you about 150 MBps - about 60 times your
    requirement. Using RAID 0 will pointlessly increase your bandwidth
    while making the latency worse (especially with a hardware RAID card).
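
    Back-of-the-envelope, with round numbers, the headroom looks like this:

        stream_mbps = 25      # rough bandwidth of one 4K video stream, Mbit/s
        disk_MBps = 150       # typical 7200 rpm sequential read, Mbyte/s

        print(disk_MBps * 8 / stream_mbps)   # ~48: dozens of 4K streams per plain disk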

    Then you want other files on a software RAID with redundancy. That's
    fine, but your whole system now needs at least 6 drives and a
    specialised controller card, when you could get better performance and
    better recoverability with 2 drives and software RAID.

    You do realise that Linux software RAID is unrelated to "motherboard RAID"?

  • From David Brown@21:1/5 to Don Y on Tue Jan 25 17:33:48 2022
    On 25/01/2022 05:54, Don Y wrote:
    On 1/24/2022 1:39 PM, bitrex wrote:
    I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year,
    which seems adequate for just about anything I throw at it.

    Depends, of course, on "what you throw at it".  Most of my workstations
    have 144G of RAM, 5T of rust.  My smallest (for writing software) has
    just 48G.  The CAD, EDA and document prep workstations can easily eat
    gobs of RAM to avoid paging to disk.  Some of my SfM "exercises" will
    eat every byte that's available!

    I'd be surprised if that Fujitsu can't be upgraded to at least 16.

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to
    be in servers probably but they work fine OOTB with Windows 10/11 and
    the modern Linux distros I've tried, and you don't have to muck with
    the OS software RAID or the motherboard's software RAID.

    RAID is an unnecessary complication.  I've watched all of my peers dump their RAID configurations in favor of simple "copies" (RAID0 without
    the controller).  Try upgrading a drive (to a larger size).  Or,
    moving a drive to another machine (I have 6 identical workstations
    and can just pull the "sleds" out of one to move them to another
    machine if the first machine dies -- barring license issues).


    If you have only two disks, then it is much better to use one for an independent copy than to have them as RAID. RAID (not RAID0, which has
    no redundancy) avoids downtime if you have a hardware failure on a
    drive. But it does nothing to help user error, file-system corruption,
    malware attacks, etc. A second independent copy of the data is vastly
    better there.

    But the problems you mention are from hardware RAID cards. With Linux
    software raid you can usually upgrade your disks easily (full
    re-striping can take a while, but that goes in the background). You can
    move your disks to other systems - I've done that, and it's not a
    problem. Some combinations are harder for upgrades if you go for more
    advanced setups - such as striped RAID10 which can let you take two
    spinning rust disks and get lower latency and higher read throughput
    than a hardware RAID0 setup could possibly do while also having full
    redundancy (at the expense of slower writes).

    If you experience failures, then you assign value to the mechanism
    that protects against those failures.  OTOH, if you *don't*, then
    there any costs associated with those mechanisms become the dominant
    factor in your usage decisions.  I.e., if they make other "normal" activities (disk upgrades) more tedious, then that counts against
    them, nullifying their intended value.


    Such balances and trade-offs are important to consider. It sounds like
    you have redundancy from having multiple workstations - it's a lot more
    common to have a single workstation, and thus redundant disks can be a
    good idea.

    E.g., most folks experience PEBKAC failures which RAID won't prevent.
    Yet, still are lazy about backups (that could alleviate those failures).

    That is absolutely true - backups are more important than RAID.


    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.

    I use surplus "shelfs" as JBOD with a SAS controller.  This allows me to also pull a drive from a shelf and install it directly in another machine without having to muck with taking apart an array, etc.

    Think about it, do you ever have to deal with a (perceived) "failure"
    when you have lots of *spare* time on your hands?  More likely, you
    are in the middle of something and not keen on being distracted by
    a "maintenance" issue.

    Thus the minimised downtime you get from RAID is a good idea!


    [In the early days of the PC, I found having duplicate systems to be
    a great way to verify a problem was software related vs. a "machine problem":  pull drive, install in identical machine and see if the
    same behavior manifests.  Also good when you lose a power supply
    or some other critical bit of hardware and can work around it just by
    moving media (I keep 3 spare power supplies for my workstations
    as a prophylactic measure)  :> ]

    Having a few spare parts on-hand is useful.

  • From bitrex@21:1/5 to bitrex on Tue Jan 25 15:51:01 2022
    On 1/25/2022 3:43 PM, bitrex wrote:

    Yes, the use cases are important, sorry for not mentioning it but I
    didn't expect to get into a discussion about it in the first place!
    Sometimes I stream many dozens of audio files simultaneously from disk e.g.

    <https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>

    Sequential read/write performance on a benchmark for two 2TB 7200 RPM
    drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
    PERC H700 controller seems rather good on Windows, approaching that of my
    OS SSD:

    <https://imgur.com/a/2svt7nY>

    Naturally the random 4k R/Ws suck. I haven't profiled it against the equivalent for Windows Storage Spaces.

    These are pretty consumer 7200 RPM drives too, not high-end by any means.
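
    For a rough cross-check of numbers like that outside a canned benchmark,
    something like this reads a big file back sequentially and reports throughput
    (the path is a placeholder; use a file larger than RAM or a cold cache, or
    you'll mostly be measuring the OS cache; a real tool like fio does this far
    more carefully):

        import time

        PATH = r"D:\bench\testfile.bin"    # placeholder: a large file on the array
        BUF = 8 * 1024 * 1024              # 8 MiB sequential reads

        total = 0
        start = time.perf_counter()
        with open(PATH, "rb", buffering=0) as f:
            while chunk := f.read(BUF):
                total += len(chunk)
        elapsed = time.perf_counter() - start

        print(f"{total / 2**20 / elapsed:.0f} MiB/s over {total / 2**30:.2f} GiB")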

  • From bitrex@21:1/5 to David Brown on Tue Jan 25 15:43:26 2022
    On 1/25/2022 11:18 AM, David Brown wrote:
    On 25/01/2022 02:03, bitrex wrote:
    On 1/24/2022 5:48 PM, David Brown wrote:
    On 24/01/2022 21:39, bitrex wrote:

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS
    software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.


    You use RAID for three purposes, which may be combined - to get higher
    speeds (for your particular usage), to get more space (compared to a
    single drive), or to get reliability and better up-time in the face of
    drive failures.

    Yes, you should use RAID on your backups - whether it be a server with
    disk space for copies of data, or "manual RAID1" by making multiple
    backups to separate USB flash drives.  But don't imagine RAID is
    connected with "backup" in any way.


    From my experience with RAID, I strongly recommend you dump these kind
    of hardware RAID controllers.  Unless you are going for serious
    top-shelf equipment with battery backup, guaranteed response time by
    recovery engineers with spare parts and that kind of thing, use Linux
    software raid.  It is far more flexible, faster, more reliable and -
    most importantly - much easier to recover in the case of hardware
    failure.

    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure.  The important points are how you spot the problem (does your
    system send you an email, or does it just put on an LED and quietly beep
    to itself behind closed doors?), and how you can recover.  Your fancy
    hardware RAID controller card is useless when you find you can't get a
    replacement disk that is on the manufacturer's "approved" list from a
    decade ago.  (With Linux, you can use /anything/ - real, virtual, local,
    remote, flash, disk, whatever.)  And what do you do when the RAID card
    dies (yes, that happens) ?  For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale.  (With Linux, plug the drives into a new
    system.)

    I have only twice lost data from RAID systems (and had to restore them
    from backup).  Both times it was hardware RAID - good quality Dell and
    IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
    used.  A 100% failure rate.

    (BSD and probably most other *nix systems have perfectly good software
    RAID too, if you don't like Linux.)

    I'm considering a hybrid scheme where the system partition is put on the
    HW controller in RAID 1, non-critical files but want fast access to like
    audio/video are on HW RAID 0, and the more critical long-term on-site
    mass storage that's not accessed too much is in some kind of software
    redundant-RAID equivalent, with changes synced to cloud backup service.

    That way you can boot from something other than the dodgy motherboard
    software-RAID but you're not dead in the water if the OS drive fails,
    and can probably use the remaining drive to create a today-image of the
    system partition to restore from.

    Worst-case you restore the system drive from your last image or from
    scratch if you have to, restoring the system drive from scratch isn't a
    crisis but it is seriously annoying, and most people don't do system
    drive images every day

    I'm sorry, but that sounds a lot like you are over-complicating things because you have read somewhere that "hardware raid is good", "raid 0 is fast", and "software raid is unreliable" - but you don't actually
    understand any of it. (I'm not trying to be insulting at all - everyone
    has limited knowledge that is helped by learning more.) Let me try to
    clear up a few misunderstandings, and give some suggestions.

    Well, Windows software raid is what it is and unfortunately on my main
    desktop I'm constrained to Windows.

    On another PC, like if I build a NAS box myself, I have other options.

    First, I recommend you drop the hardware controllers. Unless you are
    going for a serious high-end device with battery backup and the rest,
    and are happy to keep a spare card on-site, it will be less reliable,
    slower, less flexible and harder for recovery than Linux software RAID -
    by significant margins.

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache. But I
    don't know how to do the real-world testing for my own use case to know.
    I think they probably compare well in benchmarks.

    (I've been assuming you are using Linux, or another *nix. If you are
    using Windows, then you can't do software raid properly and have far
    fewer options.)

    Not on my main desktop, unfortunately. I run Linux on my laptops. If
    I built a second PC for a file server I would put Linux on it, but my
    "NAS" backup is a dumb eSATA external drive at the moment.

    Secondly, audio and visual files do not need anything fast unless you
    are talking about ridiculous high quality video, or serving many clients
    at once. 4K video wants about 25 Mbps bandwidth - a spinning rust hard
    disk will usually give you about 150 MBps - about 60 times your
    requirement. Using RAID 0 will pointlessly increase your bandwidth
    while making the latency worse (especially with a hardware RAID card).

    Yes, the use cases are important, sorry for not mentioning it but I
    didn't expect to get into a discussion about it in the first place!
    Sometimes I stream many dozens of audio files simultaneously from disk e.g.

    <https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>
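
    For scale, a quick estimate of what that kind of streaming load needs in raw
    bandwidth, assuming uncompressed 48 kHz / 24-bit stereo voices (the figures
    are assumptions, not the library's specs):

        sample_rate = 48_000      # Hz, a common studio rate (assumed)
        bytes_per_sample = 3      # 24-bit samples (assumed)
        channels = 2              # stereo voices (assumed)
        streams = 100             # "many dozens" of simultaneous voices

        per_stream = sample_rate * bytes_per_sample * channels      # bytes/s
        print(per_stream / 1e3, "kB/s per voice")                   # ~288 kB/s
        print(streams * per_stream / 1e6, "MB/s total")             # ~29 MB/s
        # The total bandwidth is modest; it's the many-files-at-once seek pattern
        # that hurts spinning disks, consistent with the random 4k numbers being
        # the weak spot.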

    Sequential read/write performance on a benchmark for two 2TB 7200 RPM
    drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
    PERC H700 controller seems rather good on Windows, approaching that of my
    OS SSD:

    <https://imgur.com/a/2svt7nY>

    Naturally the random 4k R/Ws suck. I haven't profiled it against the
    equivalent for Windows Storage Spaces.


    Then you want other files on a software RAID with redundancy. That's
    fine, but you're whole system is now needing at least 6 drives and a specialised controller card when you could get better performance and
    better recoverability with 2 drives and software RAID.

    You do realise that Linux software RAID is unrelated to "motherboard RAID" ?

    Yep

  • From David Brown@21:1/5 to bitrex on Wed Jan 26 11:48:29 2022
    On 25/01/2022 21:43, bitrex wrote:
    On 1/25/2022 11:18 AM, David Brown wrote:
    On 25/01/2022 02:03, bitrex wrote:
    On 1/24/2022 5:48 PM, David Brown wrote:
    On 24/01/2022 21:39, bitrex wrote:

    Another nice deal for mass storage/backups of work files are these
    surplus Dell H700 hardware RAID controllers, if you have a spare 4x or
    wider PCIe slot you get 8 channels of RAID 0/1 per card, the used to be
    in servers probably but they work fine OOTB with Windows 10/11 and the
    modern Linux distros I've tried, and you don't have to muck with the OS
    software RAID or the motherboard's software RAID.

    Yes a RAID array isn't a backup but I don't see any reason not to have
    your on-site backup in RAID 1.


    You use RAID for three purposes, which may be combined - to get higher
    speeds (for your particular usage), to get more space (compared to a
    single drive), or to get reliability and better up-time in the face of
    drive failures.

    Yes, you should use RAID on your backups - whether it be a server with
    disk space for copies of data, or "manual RAID1" by making multiple
    backups to separate USB flash drives.  But don't imagine RAID is
    connected with "backup" in any way.


    From my experience with RAID, I strongly recommend you dump these kind
    of hardware RAID controllers.  Unless you are going for serious
    top-shelf equipment with battery backup, guaranteed response time by
    recovery engineers with spare parts and that kind of thing, use Linux
    software raid.  It is far more flexible, faster, more reliable and -
    most importantly - much easier to recover in the case of hardware
    failure.

    Any RAID system (assuming you don't pick RAID0) can survive a disk
    failure.  The important points are how you spot the problem (does your
    system send you an email, or does it just put on an LED and quietly beep
    to itself behind closed doors?), and how you can recover.  Your fancy
    hardware RAID controller card is useless when you find you can't get a
    replacement disk that is on the manufacturer's "approved" list from a
    decade ago.  (With Linux, you can use /anything/ - real, virtual, local,
    remote, flash, disk, whatever.)  And what do you do when the RAID card
    dies (yes, that happens) ?  For many cards, the format is proprietary
    and your data is gone unless you can find some second-hand replacement
    in a reasonable time-scale.  (With Linux, plug the drives into a new
    system.)

    I have only twice lost data from RAID systems (and had to restore them
    from backup).  Both times it was hardware RAID - good quality Dell and
    IBM stuff.  Those are, oddly, the only two hardware RAID systems I have
    used.  A 100% failure rate.

    (BSD and probably most other *nix systems have perfectly good software
    RAID too, if you don't like Linux.)

    I'm considering a hybrid scheme where the system partition is put on the
    HW controller in RAID 1, non-critical files but want fast access to like
    audio/video are on HW RAID 0, and the more critical long-term on-site
    mass storage that's not accessed too much is in some kind of software
    redundant-RAID equivalent, with changes synced to cloud backup service.

    That way you can boot from something other than the dodgy motherboard
    software-RAID but you're not dead in the water if the OS drive fails,
    and can probably use the remaining drive to create a today-image of the
    system partition to restore from.

    Worst-case you restore the system drive from your last image or from
    scratch if you have to, restoring the system drive from scratch isn't a
    crisis but it is seriously annoying, and most people don't do system
    drive images every day

    I'm sorry, but that sounds a lot like you are over-complicating things
    because you have read somewhere that "hardware raid is good", "raid 0 is
    fast", and "software raid is unreliable" - but you don't actually
    understand any of it.  (I'm not trying to be insulting at all - everyone
    has limited knowledge that is helped by learning more.)  Let me try to
    clear up a few misunderstandings, and give some suggestions.

    Well, Windows software raid is what it is and unfortunately on my main desktop I'm constrained to Windows.

    OK.

    On desktop Windows, "Intel motherboard RAID" is as good as it gets for
    increased reliability and uptime. It is more efficient than hardware
    RAID, and the formats used are supported by any other motherboard and
    also by Linux md RAID - thus if the box dies, you can connect the disks
    to a Linux machine (by SATA-to-USB converter or whatever is
    convenient) and have full access.

    Pure Windows software raid can only be used on non-system disks, AFAIK,
    though details vary between Windows versions.

    These days, however, you get higher reliability (and much higher speed)
    with a single M.2 flash disk rather than RAID1 of two spinning rust
    disks. Use something like Clonezilla to make a backup image of the disk
    so you have a restorable system image.


    On another PC like if I build a NAS box myself I have other options.

    First, I recommend you drop the hardware controllers.  Unless you are
    going for a serious high-end device with battery backup and the rest,
    and are happy to keep a spare card on-site, it will be less reliable,
    slower, less flexible and harder for recovery than Linux software RAID -
    by significant margins.

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache. But I
    don't know how to do the real-world testing for my own use-case to know.
    I think they probably compare well in benchmarks.


    Shocking or not, that's the reality. (This is in reference to Linux md software raid - I don't know details of software raid on other systems.)

    There was a time when hardware raid cards were much faster, but many
    things have changed:

    1. It used to be a lot faster to do the RAID calculations (xor for
    RAID5, and more complex operations for RAID6) in dedicated ASICs than in processors. Now processors can handle these with a few percent usage of
    one of their many cores.

    2. Saturating the bandwidth of multiple disks used to require a
    significant proportion of the IO bandwidth of the processor and
    motherboard, so that having the data duplication for redundant RAID
    handled by a dedicated card reduced the load on the motherboard buses.
    Now it is not an issue - even with flash disks.

    3. It used to be that hardware raid cards reduced the latency for some
    accesses because they had dedicated cache memory (this was especially
    true for Windows, which has always been useless at caching disk data
    compared to Linux). Now with flash drives, the extra card /adds/ latency.

    4. Software raid can make smarter use of multiple disks, especially when reading. For a simple RAID1 (duplicate disks), a hardware raid card can
    only handle the reads as being from a single virtual disk. With
    software RAID1, the OS can coordinate accesses to all disks
    simultaneously, and use its knowledge of the real layout to reduce
    latencies.

    5. Hardware raid cards have very limited and fixed options for raid
    layout. Software raid can let you have options that give different
    balances for different needs. For a read-mostly layout on two disks,
    Linux raid10 can give you better performance than raid0 (hardware or
    software) while also having redundancy. <https://en.wikipedia.org/wiki/Non-standard_RAID_levels#LINUX-MD-RAID-10>
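
    (As a concrete example of that last point - hypothetical device names -
    a two-disk md raid10 in the "far 2" layout:)

    # mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda /dev/sdb
    ; reads stripe across both disks (raid0-like speed) while every block
    ; still exists on both disks (raid1-like redundancy)
    # mkfs.ext4 /dev/md0 && mount /dev/md0 /data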



    (I've been assuming you are using Linux, or another *nix.  If you are
    using Windows, then you can't do software raid properly and have far
    fewer options.)

    Not on my main desktop, unfortunately not. I run Linux on my laptops. If
    I built a second PC for a file server I would put Linux on it but my
    "NAS" backup is a dumb eSATA external drive at the moment

    Secondly, audio and video files do not need anything fast unless you
    are talking about ridiculously high quality video, or serving many
    clients at once.  4K video wants about 25 Mbps bandwidth - a spinning
    rust hard disk will usually give you about 150 MBps - roughly 50 times
    your requirement.  Using RAID 0 will pointlessly increase your bandwidth
    while making the latency worse (especially with a hardware RAID card).

    Yes, the use cases are important, sorry for not mentioning it but I
    didn't expect to get into a discussion about it in the first place!
    Sometimes I stream many dozens of audio files simultaneously from disk e.g.

    <https://www.spitfireaudio.com/shop/a-z/bbc-symphony-orchestra-core/>

    Sequential read/write performance on a benchmark for two 2TB 7200 RPM
    drives (https://www.amazon.com/gp/product/B07H2RR55Q/) in RAID 0 on the
    Perc 700 controller seems rather good on Windows, approaching that of my
    OS SSD:

    <https://imgur.com/a/2svt7nY>

    Naturally the random 4k R/Ws suck. I haven't profiled it against the equivalent for Windows Storage Spaces.


    SATA is limited to 500 MB/s. A good spinning rust can get up to about
    200 MB/s for continuous reads. RAID0 of two spinning rusts can
    therefore get fairly close to the streaming read speed of a SATA flash SSD.

    Note that a CD-quality uncompressed audio stream is 0.17 MB/s. 24-bit,
    192 kHz uncompressed is about 1 MB/s. That is, a /single/ spinning rust
    disk (with an OS that will cache sensibly) will handle nearly 200
    such streams.
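
    (Back-of-the-envelope, for anyone who wants to check the arithmetic:)

    # echo $(( 192000 * 3 * 2 ))
    ; prints 1152000 - i.e. ~1.15 MB/s for 24-bit stereo at 192 kHz
    # echo $(( 200000000 / 1152000 ))
    ; prints 173 - so a ~200 MB/s disk covers roughly 170+ such streams
    ; before seek overhead, which is where "nearly 200" comes from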


    Now for a little bit on prices, which I will grab from Newegg as a
    random US supplier, using random component choices and approximate
    prices to give a rough idea.

    2TB 7200rpm spinning rust - $50
    Perc H700 (if you can find one) - $150

    2TB 2.5" SSD - $150

    2TB M2 SSD - $170


    So for the price of your hardware raid card and two spinning rusts you
    could get, for example :

    1. An M2 SSD with /vastly/ higher speeds than your RAID0, higher
    reliability, and with a format that can be read on any modern computer
    (at most you might have to buy a USB-to-M2 adaptor ($13), rather than an outdated niche raid card).

    2. 4 spinning rusts in a software raid10 setup - faster, bigger, and
    better reliability.

    3. A 2.5" SSD and a spinning rust, connected in a Linux software RAID1
    pair with "write-behind" on the rust. You get the read latency benefits
    of the SSD, the combined streaming throughput of both, writes go first
    to the SSD and the slow rust write speed is not a bottleneck.
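
    (A rough mdadm sketch of that third option - device names are examples,
    and the write-behind option needs the internal write-intent bitmap:)

    # mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            --bitmap=internal --write-behind=1024 \
            /dev/nvme0n1p2 --write-mostly /dev/sda1
    ; reads come from the SSD; writes to the "write-mostly" spinning rust
    ; are allowed to lag, so its write speed never throttles the array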

    There is no scenario in which hardware raid comes out on top, compared
    to Linux software raid. Even if I had the raid card and the spinning
    rust, I'd throw out the raid card and have a better result.



    Then you want other files on a software RAID with redundancy.  That's
    fine, but your whole system now needs at least 6 drives and a
    specialised controller card when you could get better performance and
    better recoverability with 2 drives and software RAID.

    You do realise that Linux software RAID is unrelated to "motherboard
    RAID" ?

    Yep


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From whit3rd@21:1/5 to bitrex on Wed Jan 26 12:04:47 2022
    On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
    On 1/25/2022 11:18 AM, David Brown wrote:

    First, I recommend you drop the hardware controllers. Unless you are
    going for a serious high-end device...

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache.

    Not shocking at all; 'the performance' that matters is rarely similar to measured
    benchmarks. Even seasoned computer users can misunderstand their
    needs and multiply their overhead cost needlessly, to get improvement in operation.

    Pro photographers, sound engineering, and the occasional video edit shop
    will need one-user big fast disks, but in the modern market, the smaller and slower
    disks ARE big and fast, in absolute terms.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to All on Wed Jan 26 16:19:25 2022
    On 1/26/2022 1:04 PM, whit3rd wrote:
    Pro photographers, sound engineering, and the occasional video edit shop will need one-user big fast disks, but in the modern market, the smaller and slower
    disks ARE big and fast, in absolute terms.

    More importantly, they are very reliable. I come across thousands (literally) of scrapped machines (disks) every week. I've built a gizmo to wipe them and test them in the process. The number of "bad" disks is a tiny fraction; most of our discards are disks that we deem too small to bother with (250G or smaller).

    As most come out of corporate settings (desktops being consumer-quality
    while servers/arrays being enterprise), they tend to have high PoH figures... many exceeding 40K (4-5 years at 24/7). Still, no consequences to data integrity.

    Surely, if these IT departments feared for data on the thousands of
    seats they maintain, they would argue for the purchase of mechanisms
    to reduce that risk (as the IT department specs the devices, if they
    see high failure rates, all of their consumers will bitch about the
    choice that has been IMPOSED upon them!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bitrex@21:1/5 to Don Y on Wed Jan 26 19:51:41 2022
    On 1/26/2022 6:19 PM, Don Y wrote:
    On 1/26/2022 1:04 PM, whit3rd wrote:
    Pro photographers,  sound engineering, and the occasional video edit shop >> will need one-user big fast disks, but in the modern market, the
    smaller and slower
    disks ARE big and fast, in absolute terms.

    More importantly, they are very reliable.  I come across thousands (literally)
    of scrapped machines (disks) every week.  I've built a gizmo to wipe
    them and
    test them in the process.  The number of "bad" disks is a tiny fraction; most
    of our discards are disks that we deem too small to bother with (250G or smaller).

    As most come out of corporate settings (desktops being consumer-quality
    while servers/arrays being enterprise), they tend to have high PoH
    figures...
    many exceeding 40K (4-5 years at 24/7).  Still, no consequences to data integrity.

    Surely, if these IT departments feared for data on the thousands of
    seats they maintain, they would argue for the purchase of mechanisms
    to reduce that risk (as the IT department specs the devices, if they
    see high failure rates, all of their consumers will bitch about the
    choice that has been IMPOSED upon them!)

    The oldest drive I still own, a 250 gig 7200 RPM Barracuda, has
    accumulated 64,447 power-on hours according to SMART. It was still in
    regular use up until two years ago.

    It comes from a set of four I bought around 2007 I think. Two of them
    failed in the meantime and the other two...well I can't say I have much
    of a use for them at this point really, they're pretty slow anyway.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bitrex@21:1/5 to All on Wed Jan 26 20:24:13 2022
    On 1/26/2022 3:04 PM, whit3rd wrote:
    On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
    On 1/25/2022 11:18 AM, David Brown wrote:

    First, I recommend you drop the hardware controllers. Unless you are
    going for a serious high-end device...

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache.

    Not shocking at all; 'the performance' that matters is rarely similar to measured
    benchmarks. Even seasoned computer users can misunderstand their
    needs and multiply their overhead cost needlessly, to get improvement in operation.

    Ya, the argument also seems to be it's wasteful to keep a couple spare
    $50 surplus HW RAID cards sitting around, but I should keep a few spare
    PCs sitting around instead.

    Ok...

    Pro photographers, sound engineering, and the occasional video edit shop will need one-user big fast disks, but in the modern market, the smaller and slower
    disks ARE big and fast, in absolute terms.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to bitrex on Wed Jan 26 18:43:00 2022
    On 1/26/2022 5:51 PM, bitrex wrote:
    On 1/26/2022 6:19 PM, Don Y wrote:
    On 1/26/2022 1:04 PM, whit3rd wrote:
    Pro photographers, sound engineering, and the occasional video edit shop >>> will need one-user big fast disks, but in the modern market, the smaller and
    slower
    disks ARE big and fast, in absolute terms.

    More importantly, they are very reliable. I come across thousands (literally)
    of scrapped machines (disks) every week. I've built a gizmo to wipe them and
    test them in the process. The number of "bad" disks is a tiny fraction; most
    of our discards are disks that we deem too small to bother with (250G or
    smaller).

    As most come out of corporate settings (desktops being consumer-quality
    while servers/arrays being enterprise), they tend to have high PoH figures...
    many exceeding 40K (4-5 years at 24/7). Still, no consequences to data
    integrity.

    Surely, if these IT departments feared for data on the thousands of
    seats they maintain, they would argue for the purchase of mechanisms
    to reduce that risk (as the IT department specs the devices, if they
    see high failure rates, all of their consumers will bitch about the
    choice that has been IMPOSED upon them!)

    The oldest drive I still own, a 250 gig 7200 Barracudas, SMART tools reports has accumulated 64,447 power-on hours. It was still in regular use up until two
    years ago.

    Look to the number of sector remap events to see if the *drive* thinks
    it's having problems. None of mine report any such events. (but, I
    only check on that stat irregularly)
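
    (On Linux or BSD, smartmontools shows the relevant counters directly;
    the drive name is an example:)

    # smartctl -A /dev/sda | egrep -i 'reallocated|pending|uncorrect'
    ; Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable
    ; should all read 0 on a healthy drive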

    I have a 600 *M*B drive in my Compaq Portable 386 -- and that was tough
    to "fit" (cuz the BIOS didn't support anything that big). And a 340M in
    a box in case the 600 dies.

    I don't recall the drive size in my Voyager -- but it would also
    be small (by today's standards).

    I see 1TB as a nominal drive size. Anything smaller is just used
    offline to store disk images (you can typically image a "nearly full"
    1TB drive on < 500GB)

    I have some 70G SCA 2.5" drives that I figure might come in handy, some
    day. (But, my patience is wearing thin and they may find themselves in
    the scrap pile, soon!)

    It comes from a set of four I bought around 2007 I think. Two of them failed in
    the meantime and the other two...well I can't say I have much of a use for them
    at this point really, they're pretty slow anyway.

    Slow is relative. To a 20MHz 386, you'd be surprised how "fast" an old
    drive can be! :>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to bitrex on Wed Jan 26 18:53:23 2022
    On 1/26/2022 6:24 PM, bitrex wrote:
    On 1/26/2022 3:04 PM, whit3rd wrote:
    On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
    On 1/25/2022 11:18 AM, David Brown wrote:

    First, I recommend you drop the hardware controllers. Unless you are
    going for a serious high-end device...

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache.

    Not shocking at all; 'the performance' that matters is rarely similar to
    measured
    benchmarks. Even seasoned computer users can misunderstand their
    needs and multiply their overhead cost needlessly, to get improvement in
    operation.

    Ya, the argument also seems to be it's wasteful to keep a couple spare $50 surplus HW RAID cards sitting around, but I should keep a few spare PCs sitting
    around instead.

    I think the points are that:
    - EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
    - a spare machine can be used for different purposes other than the need
    for which it was originally purchased
    - RAID is of dubious value (I've watched each of my colleagues quietly
    abandon it after having this discussion years ago. Of course, there's
    always some "excuse" for doing so -- but, if they really WANTED to
    keep it, they surely could! I'll even offer my collection of RAID cards
    for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
    SCSI, SAS, etc. -- as damn near every server I've had came with
    such a card)

    Note that the physical size of the machine isn't even a factor in how
    it is used (think USB and FireWire). I use a tiny *netbook* to maintain
    my "distfiles" collection: connect it to the internet, plug the
    external drive that holds my current distfile collection and run a
    script that effectively rsync(8)'s with public repositories.
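
    (The sync step is basically one rsync; the mirror URL and paths below
    are placeholders, not the ones actually used:)

    # rsync -rtv --delete rsync://mirror.example.org/pub/distfiles/ /mnt/distfiles/
    ; pull anything new, drop anything the mirror has removed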

    My media tank is essentially a diskless workstation with a couple of
    USB3 drives hanging off of it.

    My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
    with a (laptop) disk drive cobbled inside.

    The biggest problem is finding inconspicuous places to hide such kit
    while being able to access them (to power them up/down, etc.)

    Ok...

    Pro photographers, sound engineering, and the occasional video edit shop
    will need one-user big fast disks, but in the modern market, the smaller and >> slower
    disks ARE big and fast, in absolute terms.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bitrex@21:1/5 to bitrex on Wed Jan 26 22:49:41 2022
    On 1/26/2022 10:40 PM, bitrex wrote:

    Hey as an aside did I mention how difficult it is to find a decent AMD micro-ITX motherboard that has two full-width PCIe slots in the first
    place? That also doesn't compromise access to the other PCIe 1x slot or
    each other when you install a GPU that takes up two slots.

    And the reason I put a micro-ITX in a full-tower case in the first place
    is that motherboards nowadays don't come with regular PCI slots anymore,
    but there are still PCI cards I want to use without having to keep a
    second old PC around that has them. But if you put a full-size motherboard
    in a full tower there's nowhere to put an adapter riser to get them.

    Consumer desktop PC components nowadays are made for gamers. If you're
    not a gamer you either have to pay out the butt for "enterprise class"
    parts or try to kludge the gamer-parts into working for you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bitrex@21:1/5 to Don Y on Wed Jan 26 22:40:29 2022
    On 1/26/2022 8:53 PM, Don Y wrote:
    On 1/26/2022 6:24 PM, bitrex wrote:
    On 1/26/2022 3:04 PM, whit3rd wrote:
    On Tuesday, January 25, 2022 at 12:43:35 PM UTC-8, bitrex wrote:
    On 1/25/2022 11:18 AM, David Brown wrote:

    First, I recommend you drop the hardware controllers. Unless you are >>>>> going for a serious high-end device...

    It seems shocking that Linux software RAID could approach the
    performance of a late-model cached hardware controller that can spend
    its entire existence optimizing the performance of that cache.

    Not shocking at all; 'the performance' that matters is rarely similar
    to measured
    benchmarks.   Even seasoned computer users can misunderstand their
    needs and multiply their overhead cost needlessly, to get
    improvement  in operation.

    Ya, the argument also seems to be it's wasteful to keep a couple spare
    $50 surplus HW RAID cards sitting around, but I should keep a few
    spare PCs sitting around instead.

    I think the points are that:
    - EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
    - a spare machine can be used for different purposes other than the need
      for which it was originally purchased
    - RAID is of dubious value (I've watched each of my colleagues quietly
      abandon it after having this discussion years ago.  Of course, there's
      always some "excuse" for doing so -- but, if they really WANTED to
      keep it, they surely could!  I'll even offer my collection of RAID cards
      for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
      SCSI, SAS, etc. -- as damn near every server I've had came with
      such a card)

    I don't know of any more cost-effective solution on Windows that lets me
    have easily-expandable mass storage in quite the same way. And RAID 0
    at least lets me push to the limit of the SATA bandwidth as I've shown
    is possible for saving and retrieving giant files, like sound libraries.

    A M2 SSD is fantastic but a 4 TB unit is about $500-700 per. With these
    HW cards with the onboard BIOS I just pop in more $50 drives if I want
    more space and it's set up for me automatically and transparently to the
    OS, with a few key-presses in the setup screen that it launches into automagically on boot if you hit control + R.

    A 2 TB M2 SSD is only about $170 as Mr. Brown says but I only have one
    M2 slot on my current motherboard, 2 PCIe slots, one of those is taken
    up by the GPU and you can maybe put one or two more on a PCIe adapter, I
    don't think it makes much sense to keep anything but the OS drive on the motherboard's M2 slot.

    Hey as an aside did I mention how difficult it is to find a decent AMD micro-ITX motherboard that has two full-width PCIe slots in the first
    place? That also doesn't compromise access to the other PCIe 1x slot or
    each other when you install a GPU that takes up two slots.

    You can't just put the GPU on any full-width slot either cuz if you read
    the fine print it usually says one of them only runs at 4x max if both
    slots are occupied, they aren't both really 16x if you use them both.

    I don't think a 4x PCIe slot can support two NVME drives in the first
    place. But a typical consumer micro-ITX motherboard still tends to come
    with 4 SATA ports which is nice, however if you also read the fine print
    it tends to say that if you use the onboard M2 slot at least two of the
    SATA ports get knocked out. Not so nice.

    I've been burned by using motherboard RAID before; I won't go back that
    way for sure. I don't know what Mr. Brown means by "Intel motherboard
    RAID" - I've never had any motherboard whose onboard soft-RAID was
    compatible with anything other than that manufacturer's. I'm not nearly
    as concerned about what look to be substantial, well-designed Dell PCIe
    cards failing as I am about my motherboard failing; frankly, consumer
    motherboards are shit!!! Next to PSUs, motherboards are the most common
    failure I've experienced in my lifetime. They aren't reliable.

    Anyway, the point of this rant is that the cost to get an equivalent
    amount of the new hotness in storage performance on a Windows desktop
    built with consumer parts starts climbing quickly; it's not really that
    cheap, and not particularly flexible.

    Note that the physical size of the machine isn't even a factor in how
    it is used (think USB and FireWire).  I use a tiny *netbook* to maintain
    my "distfiles" collection:  connect it to the internet, plug the
    external drive that holds my current distfile collection and run a
    script that effectively rsync(8)'s with public repositories.

    Y'all act like file systems are perfect; they're not. I can find many
    horror stories about trying to restore ZFS partitions on Linux, too,
    and if it doesn't work perfectly the first time it looks like it's very
    helpful to be proficient with the Linux command line, which I ain't.

    My media tank is essentially a diskless workstation with a couple of
    USB3 drives hanging off of it.

    My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
    with a (laptop) disk drive cobbled inside.

    The biggest problem is finding inconspicuous places to hide such kit
    while being able to access them (to power them up/down, etc.)

    Right, I don't want to be a network administrator.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bitrex@21:1/5 to Don Y on Wed Jan 26 22:59:34 2022
    On 1/26/2022 8:43 PM, Don Y wrote:
    On 1/26/2022 5:51 PM, bitrex wrote:
    On 1/26/2022 6:19 PM, Don Y wrote:
    On 1/26/2022 1:04 PM, whit3rd wrote:
    Pro photographers,  sound engineering, and the occasional video edit
    shop
    will need one-user big fast disks, but in the modern market, the
    smaller and slower
    disks ARE big and fast, in absolute terms.

    More importantly, they are very reliable.  I come across thousands
    (literally)
    of scrapped machines (disks) every week.  I've built a gizmo to wipe
    them and
    test them in the process.  The number of "bad" disks is a tiny
    fraction; most
    of our discards are disks that we deem too small to bother with (250G or >>> smaller).

    As most come out of corporate settings (desktops being consumer-quality
    while servers/arrays being enterprise), they tend to have high PoH
    figures...
    many exceeding 40K (4-5 years at 24/7).  Still, no consequences to data >>> integrity.

    Surely, if these IT departments feared for data on the thousands of
    seats they maintain, they would argue for the purchase of mechanisms
    to reduce that risk (as the IT department specs the devices, if they
    see high failure rates, all of their consumers will bitch about the
    choice that has been IMPOSED upon them!)

    The oldest drive I still own, a 250 gig 7200 Barracudas, SMART tools
    reports has accumulated 64,447 power-on hours. It was still in regular
    use up until two years ago.

    Look to the number of sector remap events to see if the *drive* thinks
    it's having problems.  None of mine report any such events.  (but, I
    only check on that stat irregularly)

    See for yourself, I don't know what all of this means:

    <https://imgur.com/a/g0EOkNO>

    Got the power-on hours wrong before: 0xFBE1 = 64481.

    SMART still reports this drive as "Good"

    I have a 600 *M*B drive in my Compaq Portable 386 -- and that was tough
    to "fit" (cuz the BIOS didn't support anything that big).  And a 340M in
    a box in case the 600 dies.

    I don't recall how the drive size in my Voyager -- but it would also
    be small (by today's standards).

    I see 1TB as a nominal drive size.  Anything smaller is just used
    offline to store disk images (you can typically image a "nearly full"
    1TB drive on < 500GB)

    I have some 70G SCA 2.5" drives that I figure might come in handy, some day.  (But, my patience is wearing thin and they may find themselves in
    the scrap pile, soon!)

    It comes from a set of four I bought around 2007 I think. Two of them
    failed in the meantime and the other two...well I can't say I have
    much of a use for them at this point really, they're pretty slow anyway.

    Slow is relative.  To a 20MHz 386, you'd be surprised how "fast" an old drive can be!  :>

    Anyone make an ISA to SATA adapter card? Probably.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to bitrex on Thu Jan 27 00:09:50 2022
    On 1/26/2022 8:40 PM, bitrex wrote:
    I think the points are that:
    - EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!
    - a spare machine can be used for different purposes other than the need
    for which it was originally purchased
    - RAID is of dubious value (I've watched each of my colleagues quietly
    abandon it after having this discussion years ago. Of course, there's
    always some "excuse" for doing so -- but, if they really WANTED to
    keep it, they surely could! I'll even offer my collection of RAID cards >> for them to choose a suitable replacement -- BBRAM caches, PATA, SATA,
    SCSI, SAS, etc. -- as damn near every server I've had came with
    such a card)

    I don't know of any more cost-effective solution on Windows that lets me have easily-expandable mass storage in quite the same way. And on RAID 0 at least lets me push to the limit of the SATA bandwidth as I've shown is possible for saving and retrieving giant files, like sound libraries.

    The easiest way to get more storage is with an external drive.

    With USB3, bandwidths are essentially limited by your motherboard
    and the drive. (Some USB2 implementations were strangled).

    I have *files* that are 50GB (why not?). So, file size isn't
    an issue.

    If you are careful in your choice of filesystem (and file naming
    conventions), you can move the medium to another machine hosted
    on a different OS.

    To make "moving" easier, connect the drive (USB or otherwise) to
    a small computer with a network interface. Then, export the
    drive as an SMB or NFS share, wrap a web interface around it, or
    access via FTP/etc.
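
    (The NFS variant is a one-liner per volume; the path and subnet are
    examples, and Samba needs an equivalent stanza in smb.conf:)

    # cat /etc/exports
    /mnt/archive   192.168.1.0/24(ro,no_subtree_check)
    # exportfs -ra
    ; re-export, then from any client:
    # mount -t nfs fileserver:/mnt/archive /mnt/archive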

    This is how my archive is built -- so I can access files from
    Windows machines, *BSD boxen, SPARCs -- even my ancient 386 portable
    (though bandwidth is sorely limited in that last case).

    A M2 SSD is fantastic but a 4 TB unit is about $500-700 per. With these HW cards with the onboard BIOS I just pop in more $50 drives if I want more space
    and it's set up for me automatically and transparently to the OS, with a few key-presses in the setup screen that it launches into automagically on boot if
    you hit control + R.

    But do you really need all that as on-line, "secondary storage"?
    And, if so, does it really need to be fast?

    My documentation preparation workstation has about 800GB of applications (related to preparing documentation). The other ~4T are libraries and collections of "building blocks" that I use in that process.

    E.g., if I want to create an animation showing some guy doing something,
    I find a suitable 3D model of a guy that looks kinda like I'd like him to
    look *in* those libraries. Along with any other "props" I'd like in
    this fictional world of his.

    But, once I've found the models that I want, those drives are just
    generating heat; any future accesses will be in my "playpen" and,
    likely, system RAM.

    OTOH, none of the other machines ever need to access that collection
    of 3D models. So, there's no value in my hosting them on a NAS -- that
    would mean the NAS had to be up in order for me to browse its contents: "Hmmm... this guy isn't working out as well as I'd hoped. Let me see if
    I can find an alternative..."

    (and, as it likely wouldn't have JUST stuff for this workstation,
    it would likely be larger to accommodate the specific needs of a variety
    of workstations).

    A 2 M2 SSD unit is only about $170 as Mr. Brown says but I only have one M2 slot on my current motherboard, 2 PCIe slots, one of those is taken up by the GPU and you can maybe put one or two more on a PCIe adapter, I don't think it makes much sense to keep anything but the OS drive on the motherboard's M2 slot.

    Hey as an aside did I mention how difficult it is to find a decent AMD micro-ITX motherboard that has two full-width PCIe slots in the first place? That also doesn't compromise access to the other PCIe 1x slot or each other when you install a GPU that takes up two slots.

    Try finding motherboards that can support half a dozen drives, two dual-slot GPUs *and* several other slots (for SAS HBA, SCSI HBA, etc.).

    You can't just put the GPU on any full-width slot either cuz if you read the fine print it usually says one of them only runs at 4x max if both slots are occupied, they aren't both really 16x if you use them both.

    I don't think a 4x PCIe slot can support two NVME drives in the first place. But a typical consumer micro-ITX motherboard still tends to come with 4 SATA ports which is nice, however if you also read the fine print it tends to say that if you use the onboard M2 slot at least two of the SATA ports get knocked
    out. Not so nice.

    I've been burned by using motherboard RAID before I won't go back that way for
    sure. I don't know what Mr. Brown means by "Intel motherboard RAID" I've never
    had any motherboard whose onboard soft-RAID was compatible with anything other
    than that manufacturer. I'm not nearly as concerned about what looks to be substantial well-designed Dell PCIe cards failing as I am about my motherboard
    failing frankly, consumer motherboards are shit!!!

    The downside of any RAID is you are tied to the implementation.
    I used to run a 15 slot RAID array. PITA moving volumes, adding
    volumes, etc.

    Now:
    # disklabel -I -e sdX
    ; edit as appropriate *or* copy from another similarly sized volume
    # newfs /dev/rsdXa
    ; no need for more than one "partition" on a drive!
    # mount /dev/sdXa /mountpoint
    # tar/cp/rcp/rsync/whatever
    ; copy files onto volume
    # updatearchive /mountpoint
    ; update database of volume's contents and their hashes
    # umount /mountpoint
    ; put volume on a shelf until further need
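
    (The sketch below is just the general idea behind that last step - a
    content/hash index per volume - not the actual updatearchive script:)

    # find /mountpoint -type f -exec sha256sum {} + > /var/db/volumes/LABEL.sha256
    ; one "hash  path" line per file; grep it later to find which shelf a
    ; file lives on, or re-run and diff it to spot bit-rot when remounted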

    Next to PSUs motherboards
    are the most common failure I've experienced in my lifetime, they aren't reliable.

    I've never lost one. But, all of mine have been Dell & HP boxes.

    Anyway, point to this rant is that the cost to get an equivalent amount of the
    new hotness in storage performance on a Windows desktop built with consumer parts starts increasing quickly, it's not really that cheap, and not particularly flexible.

    I don't see the problem with using external drives (?)
    If you're stuck with USB2, add a USB3 or Firewire card.

    And, if your storage is effectively "offline" in terms of
    frequency of access, just put the drive on a shelf until
    needed. A few (three?) decades ago, I had a short 24U
    rack populated with several DEC storage arrays. Eventually,
    I discarded the rack (silly boy! :< ) and all but one of the
    arrays. I moved all of the drives -- each in its own little
    "module" -- onto a closet shelf with a label affixed.
    When needed, fire up the array (shelf) and insert the drive(s)
    of interest, access as appropriate, then shut everything down.

    I use a similar approach, today, with five of these: <http://www.itinstock.com/ekmps/shops/itinstock/images/dell-powervault-md1000-15-bay-drive-storage-array-san-with-15x-300gb-3.5-15k-sas-[2]-47121-p.jpg>
    though I've kept the actual arrays (store the drives *in* the array)
    as they serve double duty as my prototype "disk sanitizer" (wipe
    60 drives at a time)

    Most times, I don't need 15+ drives spinning. So, I pull the sleds for
    the drives of interest and install them in a small 4-bay server (the
    storage arrays are noisy as hell!) that automatically exports them to
    the machines on my LAN.

    I have a pair of different arrays configured as a SAN for my ESXi
    server so each of its (24) 2T drives holds VMDKs for different
    virtual machine "emulations". But, those are only needed if
    the files I want aren't already present on any of the 8 drives
    in the ESXi host.

    I've got several 8T (consumer) drives placed around one of my PCs
    with various contents. Each physical drive has an adhesive label
    telling me what it hosts. Find USB cord for drive, plug in wall
    wart, plug in USB cable, wait for drive to spin up. Move files
    to/from medium. Reverse process to spin it back down. This is
    a bit easier than the storage arrays so that's where I keep my
    more frequently accessed "off-line" content.

    E.g., I recently went through my MP3 and FLAC libraries
    cleaning up filenames, tags, album art, etc. I pulled
    all of the content onto a workstation (from one of these
    external drives), massaged it as appropriate, then pushed
    it back onto the original medium (and updated my database
    so the hashes were current... this last step not necessary
    for other folks).

    *Lots* of ways to give yourself extra storage. Bigger
    problem is keeping track of all of it! Like having
    a room full of file cabinets and wondering where "that"
    particular file was placed :<

    Note that the physical size of the machine isn't even a factor in how
    it is used (think USB and FireWire). I use a tiny *netbook* to maintain
    my "distfiles" collection: connect it to the internet, plug the
    external drive that holds my current distfile collection and run a
    script that effectively rsync(8)'s with public repositories.

    Y'all act like file systems are perfect they're not, I can find many horror stories about trying to restore ZFS partitions in Linux, also, and if it doesn't work perfectly the first time it looks like it's very helpful to be proficient with the Linux command line, which I ain't.

    Why would you be using ZFS? That's RAID for masochists. Do those
    gold audio cables make a difference in your listening experience?
    If not, why bother with them?! What's ZFS going to give you -- besides bragging rights?

    Do you have a problem with media *failures*? (not PEBKAC) If the answer
    is "no", then live with "simple volumes". This makes life *so* much
    easier as you don't have to remember any special procedures to
    create new volumes, add volumes, remove volumes, etc.

    If I install N drives in a machine and power it up, I will see N
    mount points: /0 ... /N. If I don't see a volume backing a
    particular mount point, then there must be something wrong with that
    drive (did I ever bother to format it? does it host some oddball
    filesystem? did I fail to fully insert it?)

    My media tank is essentially a diskless workstation with a couple of
    USB3 drives hanging off of it.

    My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation
    with a (laptop) disk drive cobbled inside.

    The biggest problem is finding inconspicuous places to hide such kit
    while being able to access them (to power them up/down, etc.)

    Right, I don't want to be a network administrator.

    Anyone who can't set up an appliance, nowadays, is a dinosaur.
    The same applies to understanding how a "simple network" works
    and is configured.

    I spend no time "administering" my network. Every box has a
    static IP -- so I know where it "should be" in my local
    address space. And a name. Add each new host to the NTP
    configuration so its clock remains in sync with the rest.
    Decide which other services you want to support.
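
    (For the NTP bit, the per-host part is typically two or three lines of
    client config; chrony shown as one example, server address a placeholder:)

    # cat /etc/chrony.conf
    server 192.168.1.1 iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3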

    Configure *once*. Forget. (but, leave notes as to how you
    do these things so you can add another host, 6 mos from now)

    The "problems" SWMBO calls on me to fix are: "The printer
    is broken!" "No, it's out of PAPER! See this little light,
    here? See this paper drawer? See this half-consumed ream
    of paper? This is how they work together WITH YOU to solve
    your 'problem'..."

    Of course, I've learned to avoid certain MS services that
    have proven to be unreliable or underperforming. But,
    that's worthwhile insight gained (so as not to be bitten later).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to bitrex on Thu Jan 27 00:26:50 2022
    On 1/26/2022 8:59 PM, bitrex wrote:
    Slow is relative. To a 20MHz 386, you'd be surprised how "fast" an old
    drive can be! :>

    Anyone make an ISA to SATA adapter card? Probably.

    The 386 portable has no internal slots. I *do* have the expansion chassis
    (a large "bag" that bolts on the back) -- but it only supports two slots
    and I have a double-wide card that usually sits in there.

    A better move is a PATA-SATA adapter (I think I have some of those...
    or, maybe they are SCA-SCSI or FW or... <shrug>)

    But, you're still stuck with the limitations of the old BIOS -- which
    had no "user defined" disk geometry (and dates from the days when the
    PC told the drive what its geometry would be). To support the
    600M drive, I had to edit the BIOS EPROMs (yes, that's how old it is!)
    and fix the checksum so my changes didn't signal a POST fault.

    [Or, maybe it was 340M like the "spare" I have? I can't recall.
    Booting it to check would be tedious as the BBRAM has failed (it used
    a large battery for that, back then, made in Israel, IIRC). And,
    "setup" resides on a 5" floppy just to get in and set the parameters...
    Yes, only of value to collectors! But, a small footprint way for me
    to support a pair of ISA slots!]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Platt@21:1/5 to user@example.net on Fri Jan 28 12:49:42 2022
    In article <HMoIJ.359537$aF1.247113@fx98.iad>,
    bitrex <user@example.net> wrote:

    Look to the number of sector remap events to see if the *drive* thinks
    it's having problems.  None of mine report any such events.  (but, I
    only check on that stat irregularly)

    See for yourself, I don't know what all of this means:

    <https://imgur.com/a/g0EOkNO>

    Got the power-on hours wrong before FBE1 = 64481.

    SMART still reports this drive as "Good"

    That's how the numbers in that report look to me.

    The raw read-error rate is very low, and hasn't ever been anything
    other than very low.

    The drive has never been unable to read data from one of its sectors.
    It has never felt a need to declare a sector "bad" (e.g. required too
    many read retries), and move its data to a spare sector. There are no
    sectors which are "pending" that sort of reallocation. Hardware-level error-correction-code data recoveries (e.g. low-level bit errors during
    read, fully corrected by the ECC) seem quite reasonable.

    It seems to be spinning up reliably when power comes on.

    It does appear to have gotten hotter than it wanted to be, at some point
    (the on-a-scale-of-100 "airflow temperature" value was below
    threshold). Might want to check the fans and filters and make sure
    there's enough air flowing past the drive.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)