• [gentoo-user] Getting maximum space out of a hard drive

    From Dale@21:1/5 to All on Thu Aug 18 20:10:02 2022
    Howdy,

    I got my 10TB drive in today.  I want to maximize the amount of data I
    can put on this thing while it remains stable.  I know about -m 0 when
    making the file system but was wondering if there are any other tips or
    tricks to make the most of the drive space.  This is the output of cgdisk.


    Part. #     Size        Partition Type            Partition Name
    ----------------------------------------------------------------
                1007.0 KiB  free space
       1        9.1 TiB     Linux filesystem          10Tb
                1007.5 KiB  free space


    I'm not sure why there seems to be two alignment spots.  Is that
    normal?  Already, there is almost 1TB lost somewhere.  Any way to
    increase that and still be safe?  Right now, I've run the short test and
    it is chewing on the long test.  It will be done around 7AM tomorrow, 19
    or 20 hours to complete.  As it is, there's no data on it, nor even a
    file system yet.  Now is the time to tweak things. 

    Any tips or ideas would be appreciated. 

    Dale

    :-)  :-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andreas Fink@21:1/5 to Dale on Thu Aug 18 20:30:01 2022
    On Thu, 18 Aug 2022 13:04:57 -0500
    Dale <rdalek1967@gmail.com> wrote:

    Howdy,

    I got my 10TB drive in today.  I want to maximize the amount of data I
    can put on this thing while it remains stable.  I know about -m 0 when
    making the file system but was wondering if there are any other tips or
    tricks to make the most of the drive space.  This is the output of cgdisk.


    Part. #     Size        Partition Type            Partition Name
    ----------------------------------------------------------------
                1007.0 KiB  free space
       1        9.1 TiB     Linux filesystem          10Tb
                1007.5 KiB  free space


    I'm not sure why there seems to be two alignment spots.  Is that
    normal?  Already, there is almost 1TB lost somewhere.  Any way to
    increase that and still be safe?  Right now, I've run the short test and
    it is chewing on the long test.  It will be done around 7AM tomorrow, 19
    or 20 hours to complete.  As it is, there's no data on it, nor even a
    file system yet.  Now is the time to tweak things. 

    Any tips or ideas would be appreciated. 

    Dale

    :-)  :-) 


    Ah yes, the good old hard-disk marketing size, calculated in base 1000,
    while TiB is in base 1024.
    In short:
    1 TB = 1000^4 bytes != 1 TiB = 1024^4 bytes

    Do the math yourself to see what 10 TB comes to in TiB; it's in the
    ballpark of 9.1 TiB ;)
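
    Worked out, since it's a one-liner of arithmetic:

    10 TB = 10 x 1000^4 bytes = 10,000,000,000,000 bytes
    10,000,000,000,000 bytes / 1024^4 ≈ 9.0949 TiB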

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to rdalek1967@gmail.com on Thu Aug 18 20:20:01 2022
    On Thu, Aug 18, 2022 at 2:04 PM Dale <rdalek1967@gmail.com> wrote:


    Part. #  Size        Partition Type      Partition Name
    --------------------------------------------------------
             1007.0 KiB  free space
       1     9.1 TiB     Linux filesystem    10Tb
             1007.5 KiB  free space


    I'm not sure why there seems to be two alignment spots. Is that
    normal? Already, there is almost 1TB lost somewhere.

    10 TB = 9.09495 TiB. You aren't missing much of anything.

    And no, I don't want to get into a religious war over base 2 vs base
    10, and why it would be confusing if a tape that could store 10MB/m
    didn't store 10kB/mm but instead stored 10.24 kB/mm.

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Rich Freeman on Fri Aug 19 04:10:01 2022
    Rich Freeman wrote:
    On Thu, Aug 18, 2022 at 2:04 PM Dale <rdalek1967@gmail.com> wrote:

    Part. # Size Partition Type Partition Name
    ----------------------------------------------------------------
    1007.0 KiB free space
    1 9.1 TiB Linux filesystem 10Tb
    1007.5 KiB free space


    I'm not sure why there seems to be two alignment spots. Is that
    normal? Already, there is almost 1TB lost somewhere.
    10 TB = 9.09495 TiB. You aren't missing much of anything.

    And no, I don't want to get into a religious war over base 2 vs base
    10, and why it would be confusing if a tape that could store 10MB/m
    didn't store 10kB/mm but instead stored 10.24 kB/mm.



    Well, I realize it would be less than advertised but I just want to
    maximize it as much as I can.  I found the -m option for the file system
    a good while back and it saves a lot on these larger drives.  Since this
    is an external drive, there's no point in reserving any root space, since
    root will likely never access it after the file system is put on it. 
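
    For anyone following along, a minimal sketch of that reserved-blocks
    option in practice (the device name here is made up, not the actual
    disk):

    mkfs.ext4 -m 0 /dev/sdX1    # create ext4 with no reserved root blocks
    tune2fs -m 0 /dev/sdX1      # or drop the reservation after the fact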

    Nice to know it is a conversion thing going on tho.  ;-)

    Dale

    :-)  :-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Haller@21:1/5 to Dale on Fri Aug 19 06:30:01 2022
    Hello,

    On Thu, 18 Aug 2022, Dale wrote:
    Rich Freeman wrote:
    On Thu, Aug 18, 2022 at 2:04 PM Dale <rdalek1967@gmail.com> wrote:

    Part. # Size Partition Type Partition Name
    1007.0 KiB free space
    1 9.1 TiB Linux filesystem 10Tb
    1007.5 KiB free space


    I'm not sure why there seems to be two alignment spots. Is that
    normal? Already, there is almost 1TB lost somewhere.
    10 TB = 9.09495 TiB. You aren't missing much of anything.
    [..]
    Well, I realize it would be less than advertised but I just want to
    maximize it as much as I can. I found the -m option for the file system
    a good while back and it saves a lot on these larger drives. Since this
    is an external drive, no point in reserving any root space, since root
    will likely never access it after the file system is put on it.

    Also, if you're using ext2/3/4, there are the presets, i.e. if you're
    fairly sure about what kind of data is going to be on there, you
    can tune the filesystem so that it reserves more or less space for
    metadata like inodes, which can gain you another bit.

    I made some experiments with a temp-repurposed swapfile of 2051M size:

    Output of 'df -m':
    1M-blocks  Used  Available  Inodes  mke2fs options used
         2016    67       1847  131072  -j -t ext4
         2016    67       1949  131072  -j -t ext4 -m 0
         2048    67       1878    2048  -j -t ext4 -T largefile
         2048    67       1981    2048  -j -t ext4 -T largefile -m 0

    So, the defaults use about 1.7% of the space for metadata, and -T
    largefile only about 0.15%.  Of course, there are rather few inodes
    with '-T largefile'.  But if you want to put basically only some big
    videos on there, 2048 inodes seems a lot for a mere 2G of space ;)
    This should scale linearly (in steps) for bigger devices and can add
    up to quite a bit more space.

    Anyway, see /etc/mke2fs.conf, 'man mke2fs' and 'man mke2fs.conf' for
    details.
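
    As a hedged illustration of those presets (device name hypothetical;
    the usage types come straight from /etc/mke2fs.conf):

    # Fewer inodes, more usable space - suits big media files:
    mkfs.ext4 -T largefile  -m 0 /dev/sdX1    # one inode per 1 MiB
    mkfs.ext4 -T largefile4 -m 0 /dev/sdX1    # one inode per 4 MiB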

    I've done this in the past and got bitten by too few inodes, but you
    can get around that for "inode-hogs" like news-spools etc. by using a
    loop-filesystem with different parameters, or a different fs.  Just
    beware: reiserfs on reiserfs is a recipe for disaster.

    HTH,
    -dnh

    --
    There are two major products that come out of Berkeley:
    LSD and UNIX. We don't believe this to be a coincidence.
    -- Jeremy S. Anderson

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to All on Sat Aug 20 21:20:01 2022
    Howdy,

    Related question.  Does encryption slow the read/write speeds of a drive
    down a fair amount?  This new 10TB drive is maxing out at about
    49.51MB/s or so.  I actually copied that from the progress of rsync and
    a nice sized file.  It's been running over 24 hours now so I'd think
    buffer and cache would be well done with.  LOL 

    It did pass both a short and long self test.  I used cryptsetup -s 512
    to encrypt with, nice password too.  My rig has an FX-8350 8-core CPU
    running at 4GHz and 32GB of memory.  The CPU is fairly busy.  A little
    more than normal anyway.  Keep in mind, I have two encrypted drives
    connected right now. 

    Just curious if that speed is normal or not. 

    Thoughts?

    Dale

    :-)  :-) 

    P. S.  The pulled drive I bought had like 60 hours on it.  Dang near new. 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to rdalek1967@gmail.com on Sat Aug 20 23:00:02 2022
    On Sat, Aug 20, 2022 at 3:15 PM Dale <rdalek1967@gmail.com> wrote:

    Related question. Does encryption slow the read/write speeds of a drive
    down a fair amount? This new 10TB drive is maxing out at about
    49.51MB/s or so.

    Encryption won't impact the write speeds themselves of course, but it
    could introduce a CPU bottleneck. If you don't have any cores pegged
    at 100% though I'd say this isn't happening. On x86 encrypting a hard
    drive shouldn't be a problem. I have seen it become a bottleneck on
    something like a Pi4 if the encryption isn't directly supported in
    hardware by the CPU.

    50MB/s is reasonable if you have an IOPS-limited workload. It is of
    course a bit low for something that is bandwidth-limited. If you want
    to test that I'm not sure rsync is a great way to go. I'd pause that
    (ctrl-z is fine), then verify that all disk IO goes to zero (might
    take 30s to clear out the cache). Then I'd use "time dd bs=1M
    count=20000 if=/dev/zero of=/path/to/drive/test" to measure how long
    it takes to create a 20GB file. Oh, this assumes you're not using a
    filesystem that can detect all-zeros and compress or make the file
    sparse. If you get crazy-fast results then I'd do a test like copying
    a single large file with cp and timing that.
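
    A sketch of that test with the cache flushed first (the target path is
    a placeholder; drop_caches is the standard kernel knob):

    sync
    echo 3 > /proc/sys/vm/drop_caches    # flush page/dentry/inode caches
    time dd if=/dev/zero of=/mnt/10tb/test bs=1M count=20000 conv=fsync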

    Make sure your disk has no IO before testing. If you have two
    processes accessing at once then you're going to get a huge drop in
    performance on a spinning disk. That includes one writing process and
    one reading one, unless the reads all hit the cache.

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Dale on Sat Aug 20 23:50:01 2022
    On 8/20/22 1:15 PM, Dale wrote:
    Howdy,

    Hi,

    Related question. Does encryption slow the read/write speeds of a
    drive down a fair amount?

    This new 10TB drive is maxing out at about 49.51MB/s or so. I actually copied that from the progress of rsync and a nice sized file.
    It's been running over 24 hours now so I'd think buffer and cache
    would be well done with. LOL

    It did pass both a short and long self test. I used cryptsetup -s
    512 to encrypt with, nice password too. My rig has a FX-8350 8 core
    running at 4GHz CPU and 32GBs of memory. The CPU is fairly busy.
    A little more than normal anyway. Keep in mind, I have two encrypted
    drives connected right now.

    Just curious if that speed is normal or not.

    Thoughts?

    Dale

    :-) :-)

    P. S. The pulled drive I bought had like 60 hours on it. Dang near
    new.





    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Dale on Sun Aug 21 00:00:01 2022
    Sorry for the duplicate post. I had an email client error that
    accidentally caused me to hit send on the window I was composing in.

    On 8/20/22 1:15 PM, Dale wrote:
    Howdy,

    Hi,

    Related question. Does encryption slow the read/write speeds of a
    drive down a fair amount?

    My experience has been the opposite. I know that it's unintuitive that encryption would make things faster. But my understanding is that it
    alters how data is read from / written to the disk such that it's done
    in more optimized batches and / or optimized caching.

    This was so surprising that I decrypted a drive / re-encrypted a drive
    multiple times to compare things to come to the conclusion that
    encryption was noticeably better.

    Plus, encryption has the advantage that destroying the key renders the
    drive safe to reuse or dispose of, independent of the data that was on it.

    N.B. The actual encryption key is encrypted with the passphrase.  The
    passphrase isn't the encryption key itself.
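
    As a sketch of what that key destruction looks like with cryptsetup
    (device name made up, and this is irreversibly destructive):

    # Wipe every keyslot in the LUKS header; without a header backup,
    # the data on the device becomes permanently unreadable.
    cryptsetup luksErase /dev/sdX1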

    This new 10TB drive is maxing out at about 49.51MB/s or so.

    I wonder if you are possibly running into performance issues related to shingled drives. Their raw capacity comes at a performance penalty.

    I actually copied that from the progress of rsync and a nice sized
    file. It's been running over 24 hours now so I'd think buffer and
    cache would be well done with. LOL

    Ya, you have /probably/ exceeded the write back cache in the system's
    memory.

    It did pass both a short and long self test.  I used cryptsetup -s 512
    to encrypt with, nice password too.  My rig has a FX-8350 8 core running
    at 4GHz CPU and 32GBs of memory.  The CPU is fairly busy.  A little more than normal anyway.  Keep in mind, I have two encrypted drives connected right now.

    The last time I looked at cryptsetup / LUKS, I found that there was a
    [kernel] process per encrypted block device.

    A hack that I did while testing things was to slice up a drive into
    multiple partitions, encrypt each one, and then re-aggregate the LUKS
    devices as PVs in LVM. This surprisingly was a worthwhile performance
    boost.

    Just curious if that speed is normal or not.

    I suspect that your drive is FAR more the bottleneck than the encryption
    itself is.  There is a chance that the encryption's access pattern is
    exacerbating a drive performance issue.

    Thoughts?

    Conceptually, working in 512 B blocks on a drive that natively has 4 kB
    sectors causes the drive to do lots of extra work to account for the
    other seven 512 B blocks in each 4 kB sector.

    P. S.  The pulled drive I bought had like 60 hours on it.  Dang near new.

    :-)



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Grant Taylor on Sun Aug 21 00:50:01 2022
    Grant Taylor wrote:
    Sorry for the duplicate post.  I had an email client error that
    accidentally caused me to hit send on the window I was composing in.

    I figured it was something like that.  ;-)


    On 8/20/22 1:15 PM, Dale wrote:
    Howdy,

    Hi,

    Related question.  Does encryption slow the read/write speeds of a
    drive down a fair amount?

    My experience has been the opposite.  I know that it's unintuitive
    that encryption would make things faster.  But my understanding is
    that it alters how data is read from / written to the disk such that
    it's done in more optimized batches and / or optimized caching.

    This was so surprising that I decrypted a drive / re-encrypted a drive multiple times to compare things to come to the conclusion that
    encryption was noticeably better.

    Plus, encryption has the advantage that destroying the key renders the
    drive safe to reuse or dispose of, independent of the data that was on it.

    N.B. The actual encryption key is encrypted with the passphrase.  The passphrase isn't the encryption key itself.

    This new 10TB drive is maxing out at about 49.51MB/s or so.

    I wonder if you are possibly running into performance issues related
    to shingled drives.  Their raw capacity comes at a performance penalty.

    This drive is not supposed to be SMR.  It's a 10TB and according to a
    site I looked on, none of them are SMR, yet.  I found another site that
    said it was CMR.  So, pretty sure it isn't SMR.  Nothing is 100% tho.  I might add, it's been at about that speed since I started the backup.  If
    you have a better source of info, it's a WD model WD101EDBZ-11B1DA0 drive. 



    I actually copied that from the progress of rsync and a nice sized
    file.  It's been running over 24 hours now so I'd think buffer and
    cache would be well done with.  LOL

    Ya, you have /probably/ exceeded the write back cache in the system's
    memory.

    It did pass both a short and long self test.  I used cryptsetup -s 512
    to encrypt with, nice password too.  My rig has a FX-8350 8 core running
    at 4GHz CPU and 32GBs of memory.  The CPU is fairly busy.  A little more
    than normal anyway.  Keep in mind, I have two encrypted drives connected
    right now.

    The last time I looked at cryptsetup / LUKS, I found that there was a [kernel] process per encrypted block device.

    A hack that I did while testing things was to slice up a drive into
    multiple partitions, encrypt each one, and then re-aggregate the LUKS
    devices as PVs in LVM.  This surprisingly was a worthwhile performance boost.

    I noticed there is a kcrypt something thread running, a few actually but
    it's hard to keep up since I see it on gkrellm's top process list.  The
    CPU is running at about 40% or so average but I do have mplayer, a
    couple Firefox profiles, Seamonkey and other stuff running as well.  I
    still got plenty of CPU pedal left if needed.  Having Ktorrent and
    qbittorrent running together isn't helping.  Thinking of switching
    torrent software.  Qbit does seem to use more memory tho. 



    Just curious if that speed is normal or not.

    I suspect that your drive is FAR more the bottleneck than the
    encryption itself is.  There is a chance that the encryption's access
    pattern is exacerbating a drive performance issue.

    Thoughts?

    Conceptually, working in 512 B blocks on a drive that natively has 4 kB
    sectors causes the drive to do lots of extra work to account
    for the other seven 512 B blocks in each 4 kB sector.

    I think the 512 has something to do with key size or something.  Am I
    wrong on that?  If I need to use 256 or something, I can.  My
    understanding was that 512 was stronger than 256 as far as the
    encryption goes. 



    P. S.  The pulled drive I bought had like 60 hours on it.  Dang near
    new.

    :-)

    I'm going to try some tests Rich mentioned after it is done doing its
    backup.  I don't want to stop it if I can avoid it.  It's about half way through, give or take a little. 

    Dale

    :-)  :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From William Kenworthy@21:1/5 to Dale on Sun Aug 21 06:30:01 2022
    What are you measuring the speed with - hdparm or rsync or ?

    hdparm is best for profiling just the hard disk (it talks to the interface
    and can bypass the cache depending on settings); rsync/cp/etc. usually have
    the whole OS storage chain, including encryption, affecting throughput.
    Encryption itself can be highly variable depending on what you use and
    usually, though not always, includes compression before encryption.  There
    are tools you can use to isolate where the slowdown occurs.  atop is
    another one that may help.

    [test using a USB3 shingled drive on a 32-bit ARM system]

    xu4 ~ # hdparm -Tt /dev/sda
    /dev/sda:
     Timing cached reads:   1596 MB in  2.00 seconds = 798.93 MB/sec
     Timing buffered disk reads: 526 MB in  3.01 seconds = 174.99 MB/sec
    xu4 ~ #

    BillK

    On 21/8/22 06:45, Dale wrote:
    Grant Taylor wrote:
    Sorry for the duplicate post.  I had an email client error that
    accidentally caused me to hit send on the window I was composing in.
    I figured it was something like that.  ;-)

    On 8/20/22 1:15 PM, Dale wrote:
    Howdy,
    Hi,

    Related question.  Does encryption slow the read/write speeds of a
    drive down a fair amount?
    My experience has been the opposite.  I know that it's unintuitive
    that encryption would make things faster.  But my understanding is
    that it alters how data is read from / written to the disk such that
    it's done in more optimized batches and / or optimized caching.

    This was so surprising that I decrypted a drive / re-encrypted a drive
    multiple times to compare things to come to the conclusion that
    encryption was noticeably better.

    Plus, encryption has the advantage of destroying the key rendering the
    drive safe to use independent of the data that was on it.

    N.B. The actual encryption key is encrypted with the passphrase.  The
    passphrase isn't the encryption key itself.

    This new 10TB drive is maxing out at about 49.51MB/s or so.
    I wonder if you are possibly running into performance issues related
    to shingled drives.  Their raw capacity comes at a performance penalty.
    This drive is not supposed to be SMR.  It's a 10TB and according to a
    site I looked on, none of them are SMR, yet.  I found another site that
    said it was CMR.  So, pretty sure it isn't SMR.  Nothing is 100% tho.  I might add, it's been at about that speed since I started the backup.  If
    you have a better source of info, it's a WD model WD101EDBZ-11B1DA0 drive.


    I actually copied that from the progress of rsync and a nice sized
    file.  It's been running over 24 hours now so I'd think buffer and
    cache would be well done with.  LOL
    Ya, you have /probably/ exceeded the write back cache in the system's
    memory.

    It did pass both a short and long self test.  I used cryptsetup -s 512
    to encrypt with, nice password too.  My rig has a FX-8350 8 core running >>> at 4GHz CPU and 32GBs of memory.  The CPU is fairly busy.  A little more >>> than normal anyway.  Keep in mind, I have two encrypted drives connected >>> right now.
    The last time I looked at cryptsetup / LUKS, I found that there was a
    [kernel] process per encrypted block device.

    A hack that I did while testing things was to slice up a drive into
    multiple partitions, encrypt each one, and then re-aggregate the LUKS
    devices as PVs in LVM.  This surprisingly was a worthwhile performance
    boost.
    I noticed there is a kcrypt something thread running, a few actually but
    it's hard to keep up since I see it on gkrellm's top process list.  The
    CPU is running at about 40% or so average but I do have mplayer, a
    couple Firefox profiles, Seamonkey and other stuff running as well.  I
    still got plenty of CPU pedal left if needed.  Having Ktorrent and qbittorrent running together isn't helping.  Thinking of switching
    torrent software.  Qbit does seem to use more memory tho.


    Just curious if that speed is normal or not.
    I suspect that your drive is FAR more the bottleneck than the
    encryption itself is.  There is a chance that the encryption's access
    pattern is exascerbating a drive performance issue.

    Thoughts?
    Conceptually working in 512 B blocks on a drive that is natively 4 kB
    sectors.  Thus causing the drive to do lots of extra work to account
    for the other seven 512 B blocks in a 4 kB sector.
    I think the 512 has something to do with key size or something.  Am I
    wrong on that?  If I need to use 256 or something, I can.  My
    understanding was that 512 was stronger than 256 as far as the
    encryption goes.


    P. S.  The pulled drive I bought had like 60 hours on it.  Dang near
    new.
    :-)
    I'm going to try some tests Rich mentioned after it is done doing its backup.  I don't want to stop it if I can avoid it.  It's about half way through, give or take a little.

    Dale

    :-)  :-)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to Dale on Sun Aug 21 07:30:01 2022
    On 8/20/22 4:45 PM, Dale wrote:
    I figured it was something like that. ;-)

    :-)

    This drive is not supposed to be SMR. It's a 10TB and according to a
    site I looked on, none of them are SMR, yet. I found another site that
    said it was CMR. So, pretty sure it isn't SMR. Nothing is 100% tho.
    I might add, it's been at about that speed since I started the backup.
    If you have a better source of info, it's a WD model WD101EDBZ-11B1DA0
    drive.

    I am so far from an authority and wouldn't know anything better than a
    web search for manufacturer's documents.

    I noticed there is a kcrypt something thread running, a few actually
    but it's hard to keep up since I see it on gkrellm's top process list.
    The CPU is running at about 40% or so average but I do have mplayer,
    a couple Firefox profiles, Seamonkey and other stuff running as well.
    I still got plenty of CPU pedal left if needed. Having Ktorrent and qbittorrent running together isn't helping. Thinking of switching
    torrent software. Qbit does seem to use more memory tho.

    Ya, the number of things hitting the drive will impact performance. The
    type of requests will also impact things. In my limited experience,
    lots of little requests seem to be harder for a drive than fewer but
    bigger requests.

    I think the 512 has something to do with key size or something.
    Am I wrong on that? If I need to use 256 or something, I can.
    My understanding was that 512 was stronger than 256 as far as the
    encryption goes.

    Agreed.  At least that's what a quick look at the cryptsetup man page
    online showed me.  But I suspect the underlying concept may still stand,
    even if the particular parameter in your previous message is not related.

    I'm going to try some tests Rich mentioned after it is done doing
    its backup. I don't want to stop it if I can avoid it. It's about
    half way through, give or take a little.

    :-)



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Taylor@21:1/5 to William Kenworthy on Sun Aug 21 07:40:01 2022
    On 8/20/22 10:22 PM, William Kenworthy wrote:
    What are you measuring the speed with - hdparm or rsync or ?

    hdparm is best for profiling just the hard disk (it talks to the interface
    and can bypass the cache depending on settings); rsync/cp/etc. usually have
    the whole OS storage chain, including encryption, affecting throughput.

    How you measure performance is a complicated thing.  There is the raw
    device speed versus the speed of the system under normal load while
    interacting with the drive.

    At $WORK, we are more concerned about the throughput of the drive in our
    day-to-day use case than the drive's raw capacity.

    Encryption itself can be highly variable depending on what you use and usually though not always includes compression before encryption.

    Compression can be a very tricky thing. There's the time to decompress
    and compress the data as it's read and written (respectively). Then
    there's the throughput of data to the drive and through the drive to the
    media. If you're dealing with text that can get a high compression
    ratio with little CPU overhead, then there's a good chance that you will
    get more data into / out of the drive faster if it's compressed than at
    the same bit speed decompressed.

    To wit, I enabled compression on my ZFS pools a long time ago and never
    looked back.

    There are tools you can use to isolate where the slowdown occurs.
    atop is another one that may help.

    Yep.

    [test using a USB3 shingled drive on a 32-bit ARM system]

    Is that an Odroid XU4 system? If so, why 32-bit vs 64-bit? -- Or am I mistaken in thinking the Odroid XU4 is 64-bit?

    xu4 ~ # hdparm -Tt /dev/sda
    /dev/sda:
     Timing cached reads:   1596 MB in  2.00 seconds = 798.93 MB/sec
     Timing buffered disk reads: 526 MB in  3.01 seconds = 174.99 MB/sec
    xu4 ~ #

    If that is an Odroid XU4, then I strongly suspect that /dev/sda is
    passing through a USB interface. So ... I'd take those numbers with a
    grain of salt. -- If the system is working for you, then by all means
    more power to you.

    I found that my Odroid XU4 was /almost/ fast enough to be my daily
    driver. But the fan would kick in for some things and I didn't care for
    the noise of the stock fan. I've not yet compared contemporary
    Raspberry Pi 4 or other comparable systems.



    --
    Grant. . . .
    unix || die

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From William Kenworthy@21:1/5 to Grant Taylor on Sun Aug 21 11:30:01 2022
    On 21/8/22 13:34, Grant Taylor wrote:
    On 8/20/22 10:22 PM, William Kenworthy wrote:
    ...

    If that is an Odroid XU4, then I strongly suspect that /dev/sda is
    passing through a USB interface.  So ... I'd take those numbers with a
    grain of salt.  --  If the system is working for you, then by all
    means more power to you.

    I found that my Odroid XU4 was /almost/ fast enough to be my daily
    driver.  But the fan would kick in for some things and I didn't care
    for the noise of the stock fan.  I've not yet compared contemporary Raspberry Pi 4 or other comparable systems.


    The Samsung Exynos 5422 is built on the 28 nm technology node with the
    Cortex-A15 / Cortex-A7 architecture.  Its base clock speed is 1.40 GHz,
    with a maximum turbo boost clock of 2.10 GHz.  The Samsung Exynos 5422
    contains 8 processing cores.

    Instruction set (ISA): ARMv7-A32 (32-bit)
    Architecture: Cortex-A15 / Cortex-A7


    Yes, it's an XU4 and, as I mentioned, it's a USB drive (Seagate 4G backup
    with an SMR drive inside) - it works OK as a backup drive and the data
    transfer is fast until you fill the cache - then its throughput is best
    described as "miserable"!  The XU4 lists as 32-bit and Odroid supplies
    a 32-bit kernel etc. - I just used their config as a base when building
    Gentoo onto it - it's my build (for 5 XU4-based HC2 systems) and hosts
    the backup drive.  My attaching the hdparm run was an example of its
    use, and that happened to be the terminal I was using at the time.

    BillK



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to William Kenworthy on Sun Aug 21 12:10:02 2022
    William Kenworthy wrote:
    What are you measuring the speed with - hdparm or rsync or ?

    hdparm is best for profiling just the hard disk (it talks to the
    interface and can bypass the cache depending on settings); rsync/cp/etc.
    usually have the whole OS storage chain, including encryption, affecting
    throughput.  Encryption itself can be highly variable depending on
    what you use and usually, though not always, includes compression before
    encryption.  There are tools you can use to isolate where the slowdown
    occurs.  atop is another one that may help.

    [test using a USB3 shingled drive on a 32-bit ARM system]

    xu4 ~ # hdparm -Tt /dev/sda
    /dev/sda:
     Timing cached reads:   1596 MB in  2.00 seconds = 798.93 MB/sec
     Timing buffered disk reads: 526 MB in  3.01 seconds = 174.99 MB/sec
    xu4 ~ #

    BillK


    I copied that from a fair sized file in rsync's progress output.  I just picked one that was the highest in the last several files that were on
    the screen, without scrolling back.  No file system with compression
    since compressing video files doesn't help much.  Just ext4 on encrypted
    LVM on a single partition. 

    I tell you tho, this new drive is filling up pretty darn fast.  I got to
    build a NAS or something here.  Thing is, how to put it somewhere it is protected and all.  A NAS won't exactly fit in my fire safe.  :/  Bigger fire safe maybe????  o_O 

    Dale

    :-)  :-) 

    P. S.  Just made three more jars of pepper sauce.  Must have that to go
    with peas and cornbread.  :-D 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Dale on Sun Aug 21 18:50:01 2022
    Dale wrote:
    William Kenworthy wrote:
    What are you measuring the speed with - hdparm or rsync or ?

    hdparm is best for profiling just the hard disk (it talks to the
    interface and can bypass the cache depending on settings); rsync/cp/etc.
    usually have the whole OS storage chain, including encryption, affecting
    throughput.  Encryption itself can be highly variable depending on
    what you use and usually, though not always, includes compression before
    encryption.  There are tools you can use to isolate where the slowdown
    occurs.  atop is another one that may help.

    [test using a USB3 shingled drive on a 32-bit ARM system]

    xu4 ~ # hdparm -Tt /dev/sda
    /dev/sda:
     Timing cached reads:   1596 MB in  2.00 seconds = 798.93 MB/sec
     Timing buffered disk reads: 526 MB in  3.01 seconds = 174.99 MB/sec
    xu4 ~ #

    BillK

    I copied that from a fair sized file in rsync's progress output.  I just picked one that was the highest in the last several files that were on
    the screen, without scrolling back.  No file system with compression
    since compressing video files doesn't help much.  Just ext4 on encrypted
    LVM on a single partition. 

    I tell you tho, this new drive is filling up pretty darn fast.  I got to build a NAS or something here.  Thing is, how to put it somewhere it is protected and all.  A NAS won't exactly fit in my fire safe.  :/  Bigger fire safe maybe????  o_O 

    Dale

    :-)  :-) 



    Well, 2.5 days later, first backup done.  Then I had to restart it to
    pick up the changes made in the past couple days that rsync didn't
    catch.  When that got done, I wanted to close the drive and unhook it
    but I'm getting that 'device in use' message.  Well, after some digging,
    I found the ext4lazyinit process running and, if memory serves me, that
    is the process that finishes creating the file system in the background.
    I ran into that before.  I think it was copying the files as fast as it
    was able to create the file system to put them on.  I'll know next time
    I do backups.  If this thing ever lets me disconnect the drive.  Oh.

    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/10tb           9.1T  7.5T  1.6T  83% /mnt/10tb

    I don't see that lasting too long.  :/  Yup, gotta come up with a plan. 
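
    (Side note: lazy initialization can be disabled at mkfs time so the
    inode tables are written up front.  A sketch, with a made-up device
    name - mkfs takes longer, but no ext4lazyinit thread competes with the
    first big copy afterwards:)

    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX1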

    Dale

    :-)  :-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to lperkins@openeye.net on Mon Aug 22 17:10:01 2022
    On Mon, Aug 22, 2022 at 10:50 AM Laurence Perkins <lperkins@openeye.net> wrote:

    Note that 60ish MB/sec is very reasonable for a rotational drive. They *can* technically go faster, but only if you keep the workload almost entirely sequential. Most filesystems require a fair amount of seeking to write metadata, which slows them
    down quite a bit.

    If you're desperate for performance, you can do things like tell it to ignore write barriers and turn off various bits of flushing and increase the amount of allowed dirty write cache. These can be good for a significant performance boost at the cost
    of almost certainly corrupting the filesystem if the system loses power or crashes.


    I've also found that on large drives the sequential write speed varies
    based on position on the drive.  If I run something like badblocks on
    a new hard drive I'll see it start out at something like 200MB/s, and
    by the end it is around 100MB/s.  Then at the start of the next pass it
    will jump back up to 200MB/s.  This is just direct block-level
    sequential writing, so it is an ideal use case.

    As you say, ANY seeking will dramatically reduce the throughput. Time
    spent seeking is time not spent writing. There is no opportunity to
    "catch up" as the drive's read/write bandwidth is basically just a
    function of the recording density and rotational rate and number of platters/etc being read in parallel. If it is seeking it is a lost
    opportunity to read/write.
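
    For reference, the relaxed-durability knobs Laurence mentions look
    roughly like this (values purely illustrative, and they really do trade
    crash safety for speed):

    # Let more dirty data accumulate before writeback starts
    sysctl -w vm.dirty_background_ratio=20
    sysctl -w vm.dirty_ratio=60
    # ext4: commit the journal less often (default is every 5 seconds)
    mount -o remount,commit=60 /mnt/10tb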

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wol@21:1/5 to Frank Steinmetzger on Thu Aug 25 01:00:01 2022
    On 24/08/2022 23:39, Frank Steinmetzger wrote:
    That's a WD Red Plus.  WD introduced the Plus series after the SMR debacle
    to differentiate between the "now normal" WD Reds, which may (or maybe
    always do) have SMR, and the Plus, which are always CMR.

    Yup. The new reds are always SMR, the Red Pluses are CMR.

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Steinmetzger@21:1/5 to All on Thu Aug 25 00:40:01 2022
    Am Sat, Aug 20, 2022 at 05:45:18PM -0500 schrieb Dale:

    This new 10TB drive is maxing out at about 49.51MB/s or so.

    For a new 3.5″ drive, I find this quite slow, even for the slowest part
    near the centre of the spindle.  I tend to use hdparm for quick info, but
    that's been mentioned in another reply already.

    I wonder if you are possibly running into performance issues related
    to shingled drives.  Their raw capacity comes at a performance penalty.

    This drive is not supposed to be SMR. […] If you have a better source of info, it's a WD model WD101EDBZ-11B1DA0 drive. 

    That's a WD Red Plus.  WD introduced the Plus series after the SMR debacle
    to differentiate between the "now normal" WD Reds, which may (or maybe
    always do) have SMR, and the Plus, which are always CMR.

    Conceptually working in 512 B blocks on a drive that is natively 4 kB sectors.  Thus causing the drive to do lots of extra work to account
    for the other seven 512 B blocks in a 4 kB sector.

    I think the 512 has something to do with key size or something.  Am I
    wrong on that?  If I need to use 256 or something, I can.  My
    understanding was that 512 was stronger than 256 as far as the
    encryption goes. 

    Yeah, we are talking about different kinds of blocks here.  You have the
    disk block size, the encryption block size and the file system block size.
    (I call them all block size here, but they may have more appropriate names.)

    I think the most important thing is to have the FS block size match the
    drive, because in the end, the FS is what sends the writes out.  The
    encryption layer is transparent underneath; it simply transforms the bit
    values, but not their location.  Disclaimer: that is pure speculation on my
    part, based on common sense.
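
    One way to see the three sizes side by side (device names hypothetical;
    the encryption sector size is a LUKS2 feature):

    blockdev --getss --getpbsz /dev/sdX     # logical/physical sector size
    cryptsetup luksDump /dev/sdX1           # LUKS2 shows its sector size
    tune2fs -l /dev/mapper/10tb | grep 'Block size'   # fs block size
    # New container with 4 KiB encryption sectors:
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/sdX1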

    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    “A Melmacian almost never goes back on his word sometimes.” – Alf

  • From Frank Steinmetzger@21:1/5 to All on Thu Aug 25 00:50:01 2022
    Am Fri, Aug 19, 2022 at 06:26:14AM +0200 schrieb David Haller:
    Hello,

    On Thu, 18 Aug 2022, Dale wrote:
    Rich Freeman wrote:
    On Thu, Aug 18, 2022 at 2:04 PM Dale <rdalek1967@gmail.com> wrote:

    Part. # Size Partition Type Partition Name
    1007.0 KiB free space
    1 9.1 TiB Linux filesystem 10Tb
    1007.5 KiB free space


    I'm not sure why there seems to be two alignment spots. Is that
    normal? Already, there is almost 1TB lost somewhere.
    10 TB = 9.09495 TiB. You aren't missing much of anything.
    [..]
    Also, if you're using ext2/3/4, there's the preset, i.e. if you're
    rather sure about what kind of data is going to be on there, you
    can tune it so that it reserves more or less place for metadata like
    inodes, which can be another bit.

    When I format a partition (and I usually use ext4, with some f2fs mingled
    in on flash-based devices), I always set the inode count myself, because
    the default was always much too high - like 15 million on a 40 GiB
    partition or so.  My Arch root partition has 2 million inodes in total,
    34 % of which are in use for a full-fledged KDE setup.  That's sufficient.

    On Gentoo, I might give it some more for the ever-growing portage
    directory.  But even a few percent of a 10 TB drive amounts to many
    gigabytes.
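
    What setting the inode count yourself looks like, as a sketch (the
    numbers and device are examples only, not a recommendation):

    mkfs.ext4 -N 2000000 /dev/sdX2     # absolute inode count
    mkfs.ext4 -i 1048576 /dev/sdX2     # or: one inode per MiB of space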

    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    A hammer is a wonderful tool,
    but it is plain unsuitable for cleaning windows. (SelfHTML forum)

  • From Dale@21:1/5 to Rich Freeman on Thu Aug 25 05:50:01 2022
    Rich Freeman wrote:
    On Sat, Aug 20, 2022 at 3:15 PM Dale <rdalek1967@gmail.com> wrote:
    Related question. Does encryption slow the read/write speeds of a drive
    down a fair amount? This new 10TB drive is maxing out at about
    49.51MB/s or so.
    Encryption won't impact the write speeds themselves of course, but it
    could introduce a CPU bottleneck. If you don't have any cores pegged
    at 100% though I'd say this isn't happening. On x86 encrypting a hard
    drive shouldn't be a problem. I have seen it become a bottleneck on
    something like a Pi4 if the encryption isn't directly supported in
    hardware by the CPU.

    50MB/s is reasonable if you have an IOPS-limited workload. It is of
    course a bit low for something that is bandwidth-limited. If you want
    to test that I'm not sure rsync is a great way to go. I'd pause that
    (ctrl-z is fine), then verify that all disk IO goes to zero (might
    take 30s to clear out the cache). Then I'd use "time dd bs=1M
    count=20000 if=/dev/zero of=/path/to/drive/test" to measure how long
    it takes to create a 20GB file. Oh, this assumes you're not using a filesystem that can detect all-zeros and compress or make the file
    sparse. If you get crazy-fast results then I'd do a test like copying
    a single large file with cp and timing that.

    Make sure your disk has no IO before testing. If you have two
    processes accessing at once then you're going to get a huge drop in performance on a spinning disk. That includes one writing process and
    one reading one, unless the reads all hit the cache.


    Kinda picking a random reply. 

    I finally got the full backups done and have updated a couple of times,
    new drive and old drives.  Someone mentioned atop and I gave it a try.  I
    noticed the drive partitions that are either being read from or written to
    show up in red with a high amount of use.  After doing some google
    searching, red means really, really busy.  Makes sense.  So, the drives
    are apparently just maxing out. 

    I also noticed something else.  Given that my internet is so much faster
    now, that also puts a load on disk I/O.  Heck, the internet alone can
    almost max out the drive I/O.  On top of that I'm watching a video on my
    TV.  So, doing backups, watching TV and downloading stuff over a really
    fast internet connection, no wonder things were a little slow. 

    I also ran this on the new 10TB drive and an older SMR 8TB drive.  This
    is about normal, ish.  sdl is the 8TB and sdm is the 10TB. 


    root@fireball / # hdparm -tT /dev/sdl

    /dev/sdl:
     Timing cached reads:   8814 MB in  2.00 seconds = 4410.88 MB/sec
     Timing buffered disk reads: 558 MB in  3.00 seconds = 185.76 MB/sec
    root@fireball / # hdparm -tT /dev/sdm

    /dev/sdm:
     Timing cached reads:   8992 MB in  2.00 seconds = 4499.72 MB/sec
     Timing buffered disk reads: 612 MB in  3.01 seconds = 203.47 MB/sec
    root@fireball / #

    I have some other drives that are slower and a couple that are faster. 
    So, I guess it about averages out. 

    I have another question.  I notice that the drive activity light stays
    on a lot more, downloading/uploading faster etc etc.  Will that cause my
    drives to age faster or is that designed in?  I try to get the higher
    grade of drives and avoid those built for light duty stuff.  Of course,
    they're not designed to be used by NASA either.  :/

    By the way, that new backup drive is filling up fast.  My storage
    partition is too.  This fast internet is causing issues.  ROFL  Time to
    hunt up a deal on another 8TB or 10TB drive to add on.  Dang, my case is
    about full.  I really need a NAS or something.  :-D

    Dale

    :-)  :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From William Kenworthy@21:1/5 to Frank Steinmetzger on Thu Aug 25 08:30:01 2022
    On 25/8/22 06:45, Frank Steinmetzger wrote:
    [..]
    Also, if you're using ext2/3/4, there's the preset, i.e. if you're
    rather sure about what kind of data is going to be on there, you
    can tune it so that it reserves more or less place for metadata like
    inodes, which can be another bit.
    When I format a partition (and I usually use ext4, with some f2fs mingled
    in on flash-based devices), I always set the inode count myself, because
    the default was always much too high - like 15 million on a 40 GiB
    partition or so.  My Arch root partition has 2 million inodes in total,
    34 % of which are in use for a full-fledged KDE setup.  That's sufficient.

    On Gentoo, I might give it some more for the ever-growing portage
    directory.  But even a few percent of a 10 TB drive amounts to many
    gigabytes.

    Keep in mind ext4 is created with a fixed number of inodes - you can't
    change it once it's created, so you have to deal with reformatting the
    filesystem and replacing the data.  Just another reason to use something
    more modern - running out of inodes, especially on a large disk, is not a
    minor matter, as you have to find somewhere to copy/store the data so you
    can reformat the disk with more inodes and then put it back.  I seem to
    remember the last time it happened to me (it's not an uncommon event) I
    had to deal with mass corruption too.

    On the other hand, at one inode per file and Dale primarily storing
    large media files it may be safe to reduce them.

    BillK

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to rdalek1967@gmail.com on Thu Aug 25 15:00:01 2022
    On Thu, Aug 25, 2022 at 8:43 AM Dale <rdalek1967@gmail.com> wrote:

    I've already got data on the drive now with the default settings, so it
    is too late for the moment; however, I expect to need to add drives
    later. Keep in mind, I use LVM which means I grow file systems quite
    often by adding drives. I don't know if that grows inodes or not. I
    suspect it does somehow.

    It does not. It just means that if you want to reformat it you have
    to reformat all the drives in the LVM logical volume. :)

    There are filesystems that don't have fixed limits on inodes on
    filesystem creation, but ALL filesystems have tradeoffs. This is one
    of the big limitations of ext4, but ext4 also has a number of
    advantages over alternatives. I tend to use zfs but it has its own
    issues. I don't believe inodes are fixed in zfs, but the last time I
    looked into it there were potential issues with reducing the size of a
    vdev. (I think that was being worked on but I'm not sure how stable
    that is, or if it is compatible with grub. Actually, one of my pet
    peeves has been that finding out exactly what zfs features are
    compatible with grub is tricky. Oh, you can google it, but I don't
    think there is any official page that is kept up to date.)

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to William Kenworthy on Thu Aug 25 14:50:01 2022
    William Kenworthy wrote:

    On 25/8/22 06:45, Frank Steinmetzger wrote:
    [..]
    Also, if you're using ext2/3/4, there's the preset, i.e. if you're
    rather sure about what kind of data is going to be on there, you
    can tune it so that it reserves more or less place for metadata like
    inodes, which can be another bit.
    When I format a partition (and I usually use ext4, with some f2fs
    mingled in
    on flash bashed devices), I always set the inode count myself,
    because the
    default was always much too high. Like 15 m on a 40 GiB partition or
    so. My
    arch root partition has 2 m inodes in total, 34 % of which are in use
    for a
    full-fledged KDE setup. That’s sufficient.

    On Gentoo, I might give it some more for the ever-growing portage
    directory.
    But even a few percent on a 10 TB drive amount to many gigabytes.

    Keep in mind ext4 is created with a fixed number of inodes - you can't
    change it once it's created, so you have to deal with reformatting the
    filesystem and replacing the data.  Just another reason to use
    something more modern - running out of inodes, especially on a large
    disk, is not a minor matter, as you have to find somewhere to copy/store
    the data so you can reformat the disk with more inodes and then put it
    back.  I seem to remember the last time it happened to me (it's not an
    uncommon event) I had to deal with mass corruption too.

    On the other hand, at one inode per file and Dale primarily storing
    large media files it may be safe to reduce them.

    BillK

    I've already got data on the drive now with the default settings, so it
    is too late for the moment; however, I expect to need to add drives
    later.  Keep in mind, I use LVM, which means I grow file systems quite
    often by adding drives.  I don't know if that grows inodes or not.  I
    suspect it does somehow.  These are the current inodes on the drives
    inside my puter.  I removed the cruft from the list.


    root@fireball / # df -i
    Filesystem                   Inodes   IUsed     IFree IUse% Mounted on
    /dev/sda6                   1525920   18519   1507401    2% /
    /dev/mapper/OS-usr          2564096  752882   1811214   30% /usr
    /dev/sda1                     98392    1219     97173    2% /boot
    /dev/mapper/OS-var          3407872  322463   3085409   10% /var
    /dev/mapper/home-home--lv 183144448  727910 182416538    1% /home
    /dev/mapper/backup-backup  45793280 1359825  44433455    3% /backup
    /dev/mapper/crypt         488378368   43027 488335341    1% /home/dale/Desktop/Crypt
    root@fireball / #


    The portage tree is on /var on my system.  The ones I am most curious
    about is the /home and the crypt one.  As you can see, /home and crypt
    is using only a tiny fraction of inodes.  Here is the interesting bit:


    root@fireball / # df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/sda6                   23G  2.2G   20G  10% /
    /dev/mapper/OS-usr          39G   22G   15G  61% /usr
    /dev/sda1                  373M  187M  167M  53% /boot
    /dev/mapper/OS-var          52G   23G   26G  47% /var
    /dev/mapper/home-home--lv  5.5T  2.6T  2.9T  48% /home
    /dev/mapper/backup-backup  688G  369G  319G  54% /backup
    /dev/mapper/crypt           15T   12T  3.1T  79% /home/dale/Desktop/Crypt
    root@fireball / #


    As you can see, /home is about half full; crypt, however, is pushing 80%
    pretty hard.  On /home, I have my documents directory, which has lots of
    smaller files compared to crypt.  While /home does have some videos, it
    also contains my camera picture directory and the directories for my
    trail cameras.  Also, it has small documents such as recipes, which can
    be anywhere from a few kilobytes to maybe 1MB or so, with not many
    much larger than that.  While I may not want to reduce /home much, I
    could likely reduce crypt by 90% and still have a lot left over,
    provided that carries over when I grow the file system as I add drives. 
    Yes, I'm already on the hunt for another hard drive to add onto crypt. 

    Is there a tool to tell the average size of files in a directory?  Tools
    that would help us know how many inodes one actually needs?  As it
    is, I'm doing a lot of updating of old files with larger files, due to
    higher resolution of videos.  Example: some videos are going from a
    little below 720p to 720p or 1080p.  The difference in file size is
    pretty large.  Sometimes double or more. 
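
    One rough way to answer that with standard tools (GNU find assumed; the
    path is just an example):

    find /home/dale/Desktop/Crypt -type f -printf '%s\n' | \
        awk '{ n++; s += $1 } END { printf "%d files, avg %.1f MiB\n", n, s/n/1048576 }'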

    This is interesting to consider here.  One doesn't want to run out of
    inodes but at the same time, even if I only had 10% of the number I have
    now for crypt I'd still have 10 times more than I need with the thing
    almost full. This is also true for my backup drives as well.  Two of
    them at least.  One that has documents I'd likely leave as is. 

    I'm going to have to work on better storage somehow.  All of this is
    going to crop up again eventually, likely sooner rather than later. 

    Dale

    :-)  :-) 

    P. S.  I have to close my VPN to check emails still.  Pardon the time
    lag in replies compared to the past. 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jack@21:1/5 to Rich Freeman on Thu Aug 25 17:20:01 2022
    On 8/25/22 08:52, Rich Freeman wrote:
    On Thu, Aug 25, 2022 at 8:43 AM Dale <rdalek1967@gmail.com> wrote:
    I've already got data on the drive now with the default settings, so it
    is too late for the moment; however, I expect to need to add drives
    later. Keep in mind, I use LVM which means I grow file systems quite
    often by adding drives. I don't know if that grows inodes or not. I
    suspect it does somehow.
    It does not. It just means that if you want to reformat it you have
    to reformat all the drives in the LVM logical volume. :)

    As I remember, if you enlarge a logical volume by adding a new physical
    volume, you then have to expand the filesystem to use that additional
    space.  Looking at resize2fs, it does increase the number of inodes, but
    only linearly in proportion to the amount of increased size.  I don't
    see any way to tell it to decrease, or even just not increase, the
    number of inodes.
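
    To see that behaviour for yourself, something like this should work
    (untested sketch; /dev/vg/lv stands in for the real logical volume name):

        # inode count before growing
        tune2fs -l /dev/vg/lv | grep -i 'inode count'
        # grow the LV, then grow the filesystem into the new space
        lvextend -L +1T /dev/vg/lv
        resize2fs /dev/vg/lv
        # inode count after: larger, in proportion to the added space
        tune2fs -l /dev/vg/lv | grep -i 'inode count'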

    Related question - how much space would you actually save by decreasing
    the number of inodes by 90%?  Enough for one or two more videos?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to rdalek1967@gmail.com on Thu Aug 25 23:10:01 2022
    On Thu, Aug 25, 2022 at 2:59 PM Dale <rdalek1967@gmail.com> wrote:

    While at it, can I move the drives on LVM to another system without
    having to copy anything? Just physically move the drives and LVM see
    them correctly on the new system?

    As long as we aren't talking about boot partitions/sectors, the answer
    is yes. As long as all the drives are attached and accessible by the
    kernel (necessary drivers/etc present), then LVM will put them
    together. It doesn't care where it finds them, as all the necessary
    metadata is stored on the drives and gets scanned.
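
    In practice it amounts to little more than this (hypothetical volume
    group name "myvg"; on most systems even this is done automatically at
    boot):

        vgscan                  # scan attached drives for LVM metadata
        vgchange -ay myvg       # activate the volume group it found
        mount /dev/myvg/somelv /mnt/wherever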

    If you're talking about boot partitions/sectors then it isn't a huge
    problem, but you do need to ensure the bootloader can find the right drives/etc, as it isn't nearly as flexible and may need updates if the
    drives switch order/etc, at least for legacy bootloaders.

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wols Lists@21:1/5 to Dale on Fri Aug 26 00:50:01 2022
    On 25/08/2022 19:59, Dale wrote:
    While at it, can I move the drives on LVM to another system without
    having to copy anything?  Just physically move the drives and LVM see
    them correctly on the new system?  I may try to build a small computer
    for a NAS soon.  I'm not sure what is the least I can buy that will
    perform well.  I need to look into small mobos to see what options I
    have.  I mostly need a CPU to handle moving files, memory to pass it
    through and lots of SATA ports.  I figure a fast card for most SATA ports.

    https://raid.wiki.kernel.org/index.php/Linux_Raid

    That might be a good read ... I know I push it a bit, but it does go
    into disk management a decent bit.

    If you can think of any improvements, they'll be welcome! :-)

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Fri Aug 26 02:20:01 2022
    On Thu, Aug 25, 2022 at 4:59 PM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>
    I may do some mobo hunting shortly. See what little thing I can buy
    that is powerful enough. I don't think a Raspberry Pi is enough. It
    gets close tho. Biggest thing, I'd need a lot of SATA ports. LOTS of
    them.

    Granted, I had a couple of old cases which lowered the cost but
    I went to a local computer store and bought used motherboards
    that came with processors and memory. They were both Core i7
    but I paid only about $75 each. I needed power supplies and hard drives
    so each machine ended up around $350 or so by the time I was done.
    Each has 2 4TB drives for storage and a 1TB drive for the OS. A lot
    of used motherboards have on-board VGA and Gb/s networking.

    These are TrueNAS machines, FreeBSD not Linux, but they have
    a Linux version now if that makes you more comfortable.

    I'd stick with AMD64 as it's better tested and I don't think you'll
    get the network throughput you need to be fast with a Raspberry Pi

    <div dir="ltr"><br><br>On Thu, Aug 25, 2022 at 4:59 PM Dale &lt;<a href="mailto:rdalek1967@gmail.com">rdalek1967@gmail.com</a>&gt; wrote:<br>&lt;SNIP&gt;<br>&gt; I may do some mobo hunting shortly.  See what little thing I can buy<br>&gt; that is
    powerful enough.  I don&#39;t think a Raspberry Pi is enough.  It<br>&gt; gets close tho.  Biggest thing, I&#39;d need a lot of SATA ports.  LOTS of<br>&gt; them. <br><br><div>Granted, I had a couple of old cases which lowered the cost but</div><div>
    I went to a local computer store and bought used motherboards</div><div>that came with processors and memory. They were both Core i7</div><div>but I paid only about $75 each. I needed power supplies and hard drives</div><div>so each machine ended up
    around $350 or so by the time I was done.</div><div>Each has 2 4TB drives for storage and a 1TB drive for the OS. A lot</div><div>of used motherboards have on-board VGA and Gb/S networking.</div><div><br></div><div>These are TrueNAS machines, FreeBSD not
    Linux, but they have</div><div>a Linux version now if that makes you more comfortable. </div><div><br></div><div>I&#39;d stick with AMD64 as it&#39;s better tested and I don&#39;t think you&#39;ll</div><div>get the network throughput you need to be fast
    with a Raspberry Pi</div></div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wols Lists@21:1/5 to Dale on Fri Aug 26 09:30:01 2022
    On 26/08/2022 00:56, Dale wrote:
    Wols Lists wrote:
    On 25/08/2022 19:59, Dale wrote:
    While at it, can I move the drives on LVM to another system without
    having to copy anything?  Just physically move the drives and LVM see
    them correctly on the new system?  I may try to build a small computer
    for a NAS soon.  I'm not sure what is the least I can buy that will
    perform well.  I need to look into small mobos to see what options I
    have.  I mostly need a CPU to handle moving files, memory to pass it
    through and lots of SATA ports.  I figure a fast card for most SATA
    ports.

    https://raid.wiki.kernel.org/index.php/Linux_Raid

    That might be a good read ... I know I push it a bit, but it does go
    into disk management a decent bit.

    If you can think of any improvements, they'll be welcome! :-)

    It seems I've been to that link before, may even have it bookmarked, somewhere.  I'll give it another read tho.  After all, it has to be good
    or you wouldn't share it.  ;-)

    I think it's saved a lot of bacon over the years :-) Even if I've mostly
    edited it. I haven't written much of it from scratch.

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Haller@21:1/5 to Dale on Fri Aug 26 11:20:02 2022
    Hello,

    On Thu, 25 Aug 2022, Dale wrote:
    Jack wrote:
    [..]
    Related question - how much space would you actually save by
    decreasing the number of inodes by 90%? Enough for one or two more
    videos?

    Now I have to admit, that is a question I have too.

    From my tests with a swapfile (which matches what I remember from real
    FSen), I think '-T largefile' vs. default frees up around 1.6% of the
    capacity, so for 9.1T it'd be around 150G which might be worthwhile
    _iff_ you are sure about what kind of files will go on that FS.
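
    If you go that route at the next mkfs, it looks like this (sketch only;
    /dev/sdX1 is a placeholder, and per the stock /etc/mke2fs.conf the
    largefile profile means one inode per 1 MiB, largefile4 one per 4 MiB):

        mkfs.ext4 -T largefile -m 0 /dev/sdX1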

    FWIW, in a pinch if you run out of Inodes, you can create an image
    file on that fs, taking just 1 Inode, format that image differently,
    loop-mount it and put tons of files inside the image. It'll eat a bit
    of performance though.
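
    Roughly like so (untested; the "news" mkfs profile is just one way to
    get a dense inode ratio, and the paths are placeholders):

        truncate -s 50G /mnt/bigfs/inodes.img    # sparse file, costs 1 inode
        mkfs.ext4 -F -T news /mnt/bigfs/inodes.img
        mount -o loop /mnt/bigfs/inodes.img /mnt/many-small-files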

    And about the average filesize in a dir: just find out the size (e.g.:
        du -msx /foo
    ) and the number of used inodes (e.g.:
        find /foo -xdev | wc -l
    [1]) and then just divide:
        summed_size_in_unit / number_of_files = avg_size_in_unit
    For FSen, just divide used space by used inodes.
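
    Put together as one rough snippet (counting regular files only, which is
    a slight refinement over counting all inodes):

        kb=$(du -sxk /foo | cut -f1)
        n=$(find /foo -xdev -type f | wc -l)
        echo "$(( kb / n )) KiB average"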

    HTH,
    -dnh

    [1] assuming you have no files with '\n' in the filename

    --
    printk (KERN_ERR "%s: Oops - your private data area is hosed!\n", ...)
    linux-2.6.6/drivers/net/ewrk3.c

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gerrit Kuehn@21:1/5 to Dale on Fri Aug 26 14:00:01 2022
    On Fri, 26 Aug 2022 06:26:39 -0500
    Dale <rdalek1967@gmail.com> wrote:

    I looked at something called ITX but they have only one PCIe slot
    usually.  That's not enough.  I'd like to have two 6 or 8 port SATA
    cards.  Then balance the drives on each.  I think some of the
    throughput is shared, so the more drives on it, the slower it can be.
    I'd like to have two such cards. 12 or 16 drives should be enough to
    last a while.  Part of me wants to do RAID but not sure about that.
    Yet.  I think I'm just going to go with ATX since it has several PCIe
    slots. 

    Usually, an ITX mainboard will feature a PCIe slot /and/ additional
    onboard SATA connectors. So you might be fine with an 8-port controller
    card plus the onboard connections.
    However, even if you want 16 SATA connections on one PCIe card, you can
    buy that. The Broadcom SAS 9201-16i is one example. Whether that is
    enough bandwidth-wise will depend on your PCIe slot and the drives
    you're going to attach.
    I don't see any reason to do hardware raid these days; just an HBA and
    software raid (zfs or other solutions) should be fine.
    Everything just my 2¢ here, of course...


    cu
    Gerrit

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to rdalek1967@gmail.com on Fri Aug 26 14:10:01 2022
    On Fri, Aug 26, 2022 at 7:26 AM Dale <rdalek1967@gmail.com> wrote:

    I looked into the Raspberry and the newest version, about $150 now, doesn't even have SATA ports.

    The Pi4 is definitely a step up from the previous versions in terms of
    IO, but it is still pretty limited. It has USB3 and gigabit, and they
    don't share a USB host or anything like that, so you should get close
    to full performance out of both. The CPU is of course pretty limited,
    as is RAM. Biggest benefit is the super-low power consumption, and
    that is something I take seriously as for a lot of cheap hardware that
    runs 24x7 the power cost rapidly exceeds the purchase price. I see
    people buying old servers for $100 or whatever and those things will
    often go through $100 worth of electricity in a few months.
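
    To put rough numbers on that (assuming ~$0.15/kWh, a typical US rate):
    an old server idling at 250W uses 250W x 720h = 180 kWh a month, which
    is about $27/month, so it blows past $100 in under four months. A Pi4
    drawing ~5W costs well under $1/month on the same math.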

    How many hard drives are you talking about? There are two general
    routes to go for something like this. The simplest and most
    traditional way is a NAS box of some kind, with RAID. The issue with
    these approaches is that you're limited by the number of hard drives
    you can run off of one host, and of course if anything other than a
    drive fails you're offline. The other approach is a distributed
    filesystem. That ramps up the learning curve quite a bit, but for
    something like media where IOPS doesn't matter it eliminates the need
    to try to cram a dozen hard drives into one host. Ceph can also do
    IOPS but you're talking 10GbE + NVMe and big bucks, and that is how
    modern server farms would do it.

    I'll describe the traditional route since I suspect that is where
    you're going to end up. If you only had 2-4 drives total you could
    probably get away with a Pi4 and USB3 drives, but if you want
    encryption or anything CPU-intensive you're probably going to
    bottleneck on the CPU. It would be fine if you're more concerned with
    capacity than speed.

    For more drives than that, or just to be more robust, then any
    standard amd64 build will be fine. Obviously a motherboard with lots
    of SATA ports will help here. However, that almost always is a
    bottleneck on consumer gear, and the typical solution to that for SATA
    is a host bus adapter. They're expensive new, but cheap on ebay (I've
    had them fail though, which is probably why companies tend to sell
    them while they're still working). They also use a ton of power -
    I've measured them using upwards of 60W - they're designed for servers
    where nobody seems to care. A typical HBA can provide 8-32 SATA
    ports, via mini-SAS breakout cables (one mini-SAS port can provide 4
    SATA ports). HBAs tend to use a lot of PCIe lanes - you don't
    necessarily need all of them if you only have a few drives and they're
    spinning disks, but it is probably easiest if you get a CPU with
    integrated graphics and use the 16x slot for the HBA. That or get a
    motherboard with two large slots (the second usually isn't 16x, and
    even 4-8x slots aren't super-common on consumer motherboards).

    For software I'd use mdadm plus LVM. ZFS or btrfs are your other
    options, and those can run on bare metal, but btrfs is immature and
    ZFS cannot be reshaped the way mdadm can, so there are tradeoffs. If
    you want to use your existing drives and don't have a backup to
    restore or want to do it live, then the easiest option there is to add
    one drive to the system to expand capacity. Put mdadm on that drive
    as a degraded raid1 or whatever, then put LVM on top, and migrate data
    from an existing disk live over to the new one, freeing up one or more
    existing drives. Then put mdadm on those and LVM and migrate more
    data onto them, and so on, until everything is running on top of
    mdadm. Of course you need to plan how you want the array to look and
    have enough drives that you get the desired level of redundancy. You
    can start with degraded arrays (which is no worse than what you have
    now), then when enough drives are freed up they can be added as pairs
    to fill it out.
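
    As a sketch, one round of that shuffle looks like this (device names
    are placeholders; double-check against your own layout before running
    anything):

        # new disk becomes half of a deliberately-degraded raid1
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 missing
        pvcreate /dev/md0
        vgextend myvg /dev/md0
        # migrate extents off an old disk, live, then drop it from the VG
        pvmove /dev/sdY1 /dev/md0
        vgreduce myvg /dev/sdY1
        # the freed disk can now complete the mirror
        mdadm --add /dev/md0 /dev/sdY1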

    If you want to go the distributed storage route then CephFS is the
    canonical solution at this point but it is RAM-hungry so it tends to
    be expensive. It is also complex, but there are ansible playbooks and
    so on to manage that (though playbooks with 100+ plays in them make me nervous). For something simpler MooseFS or LizardFS are probably
    where I'd start. I'm running LizardFS but they've been on the edge of
    death for years upstream and MooseFS licensing is apparently better
    now, so I'd probably look at that first. I did a talk on lizardfs
    recently: https://www.youtube.com/watch?v=dbMRcVrdsQs

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Fri Aug 26 16:10:01 2022
    On Fri, Aug 26, 2022 at 4:27 AM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>

    I looked into the Raspberry and the newest version, about $150 now,
    doesn't even have SATA ports. I can add a thing called a "hat" I think
    that adds a couple but thing is, that costs more and still isn't enough. I really don't like USB and hard drive mixing. Every time I do that, the
    hard drive turns into a door stop. Currently, I have three Rosewill
    external enclosures and they have USB and eSATA ports. I use the eSATA connections and no problems. It's also really fast. So, I plan to stick
    with SATA connections.

    You do NOT want the Rasp Pi for this. You would have to compile and
    maintain the OS yourself just adding work and the disk interfaces aren't
    high performance enough.

    Obviously you can do what you are most comfortable with but to me a NAS
    machine with a bunch of external drives does not sound very reliable.


    I have a old computer that I might could use. It is 4 core something and
    I think it has 4GBs of memory, maxed out. I think it will perform well
    enough but wish it had a little more horses in it.

    That's more than enough horsepower for TrueNAS Core. If the box will hold 3 drives then you have 1 system drive and 2 data drives for a ZFS RAID1.
    That's how both of my NAS boxes are set up.

    You can buy more memory at lots of places inexpensively but you don't need
    it to start. 4GB will work with TrueNAS Core. My machines have 8 & 12GB. I never use it all.

    https://www.truenas.com/truenas-core/

    Even if your old box has only 2 drives, download TrueNAS and just set it
    up on one system drive. It's not Gentoo difficult. It's a fully formed
    install system which will probably be running in an hour. You can use 1
    drive in your data tank and add additional drives later.

    The speed of a NAS is _mostly_ a balance between network speed and disk
    speed. Processor usage for me is generally about 20%. If your network is
    gigabit then you can sustain somewhere around 850Mb/s on the cable, which
    translates nicely to about 100MB/s on your disk drives. There isn't
    that much CPU usage as it's mostly compression when backing up.

    Unless you use the box as a file server, getting data back off is a
    once-in-a-while event where you don't care too much about speed, or at
    least I don't.

    Just do it. Download the install disc and give it a try. Nothing much to
    lose.

    Good luck.
    Mark

    <div dir="ltr"><br><br>On Fri, Aug 26, 2022 at 4:27 AM Dale &lt;<a href="mailto:rdalek1967@gmail.com">rdalek1967@gmail.com</a>&gt; wrote:<br>&lt;SNIP&gt;<br>&gt;<br>&gt; I looked into the Raspberry and the newest version, about $150 now, doesn&#39;t even
    have SATA ports.  I can add a thing called a &quot;hat&quot; I think that adds a couple but thing is, that costs more and still isn&#39;t enough.  I really don&#39;t like USB and hard drive mixing.  Every time I do that, the hard drive turns into a
    door stop.  Currently, I have three Rosewill external enclosures and they have USB and eSATA ports.  I use the eSATA connections and no problems.  It&#39;s also really fast.  So, I plan to stick with SATA connections.<div><br></div><div>You do NOT
    want  the Rasp Pi for this. You would have to compile and maintain the OS yourself just adding work and the disk interfaces aren&#39;t high performance enough.</div><div><br></div><div>Obviously you can do what you are most comfortable with but to me a
    NAS machine with a bunch of external drives does not sound very reliable. </div><div><br>&gt;<br>&gt; I have a old computer that I might could use.  It is 4 core something and I think it has 4GBs of memory, maxed out.  I think it will perform well
    enough but wish it had a little more horses in it.</div><div><br></div><div>That&#39;s more than enough horsepower for TrueNAS Core. If the box will hold 3 drives then you have 1 system drive and 2 data drives for a ZFS RAID1. That&#39;s how both of my
    NAS boxes are set up.</div><div><br></div><div>You can buy more memory at lots of places inexpensively but you don&#39;t need it to start. 4GB will work with TrueNAS Core. My machines have 8 &amp; 12GB. I never use it all.</div><div><br></div><div><a
    href="https://www.truenas.com/truenas-core/">https://www.truenas.com/truenas-core/</a><br></div><div><br></div><div>Even if your old box has only 2 drives, download TrueNAS and just set it up on one systemdrive. It&#39;s not Gentoo difficult. It&#39;s a
    fully formed install system which will probably be running in an hour. You can use 1 drive in your data tank and add additional drives later.</div><div><br></div><div>The speed of a NAS is _mostly_ a balance between network speed and disk speed.
    Processor usage for me is generally about 20%. If your network is GigaBit then you can sustain somewhere about 850Mb/S on the cables which translates nicely to about 100 MegaByte/S on your disk drives. There isn&#39;t that much CPU usage as it&#39;s
    mostly compression when backing up.</div><div><br></div><div>Unless you use the box as a file server getting data back off is a once in a while event where you don&#39;t care too much about speed, or at least I don&#39;t. </div><div><br></div><div>Just
    do it. Download the install disc and give it a try. Nothing much to lose.</div><div><br></div><div>Good luck.</div><div>Mark</div></div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wols Lists@21:1/5 to Dale on Fri Aug 26 15:40:01 2022
    On 26/08/2022 12:27, Dale wrote:
    I think it's saved a lot of bacon over the years:-) Even if I've
    mostly edited it. I haven't written much of it from scratch.

    Cheers,
    Wol
    I see typos.  Do they matter to you?

    Apart from the one intentional one, I'd like to fix any others :-)

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Freeman@21:1/5 to markknecht@gmail.com on Fri Aug 26 16:30:01 2022
    On Fri, Aug 26, 2022 at 10:09 AM Mark Knecht <markknecht@gmail.com> wrote:

    Obviously you can do what you are most comfortable with but to me a NAS machine with a bunch of external drives does not sound very reliable.


    I would have thought the same, but messing around with LizardFS I've
    found that the USB3 hard drives never disconnect from their Pi4 hosts.
    I've had more issues with LSI HBAs dying. Of course I have host-level redundancy so if one Pi4 flakes out I can just reboot it with zero
    downtime - the master server is on an amd64 container. I only have
    about 2 drives per Pi right now as well - at this point I'd probably
    add more drives per host but I wanted to get out to 5-6 hosts first so
    that I get better performance especially during rebuilds. Gigabit
    networking is definitely a bottleneck, but with all the chunkservers
    on one switch they each get gigabit full duplex to all the others so
    rebuilds are still reasonably fast. To go with 10GbE you'd need
    hardware with better IO than a Pi4 I'd think, but the main bottleneck
    on the Pi4 I'm having is with encryption which hits the CPU. I am
    using dm-crypt for this which I think is hardware-optimized. I will
    say that zfs encryption is definitely not hardware-optimized and
    really gets CPU-bound, so I'm running zfs on top of dm-crypt. I
    should probably consider if dm-integrity makes more sense than zfs in
    this application.
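
    For reference, the layering is just this (a sketch; device and pool
    names are hypothetical, and luksFormat of course wipes the disk):

        cryptsetup luksFormat /dev/sdX
        cryptsetup open /dev/sdX crypt_sdX       # AES-NI accelerated where available
        zpool create tank /dev/mapper/crypt_sdX  # zfs sits on the decrypted mapping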

    --
    Rich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to rich0@gentoo.org on Fri Aug 26 16:50:01 2022
    On Fri, Aug 26, 2022 at 7:25 AM Rich Freeman <rich0@gentoo.org> wrote:

    On Fri, Aug 26, 2022 at 10:09 AM Mark Knecht <markknecht@gmail.com> wrote:

    Obviously you can do what you are most comfortable with but to me a NAS
    machine with a bunch of external drives does not sound very reliable.


    I would have thought the same, but messing around with LizardFS I've
    found that the USB3 hard drives never disconnect from their Pi4 hosts.
    I've had more issues with LSI HBAs dying. Of course I have host-level redundancy so if one Pi4 flakes out I can just reboot it with zero
    downtime - the master server is on an amd64 container. I only have
    about 2 drives per Pi right now as well - at this point I'd probably
    add more drives per host but I wanted to get out to 5-6 hosts first so
    that I get better performance especially during rebuilds. Gigabit
    networking is definitely a bottleneck, but with all the chunkservers
    on one switch they each get gigabit full duplex to all the others so
    rebuilds are still reasonably fast. To go with 10GbE you'd need
    hardware with better IO than a Pi4 I'd think, but the main bottleneck
    on the Pi4 I'm having is with encryption which hits the CPU. I am
    using dm-crypt for this which I think is hardware-optimized. I will
    say that zfs encryption is definitely not hardware-optimized and
    really gets CPU-bound, so I'm running zfs on top of dm-crypt. I
    should probably consider if dm-integrity makes more sense than zfs in
    this application.

    --
    Rich

    Quite interesting Rich. Thanks!

    My needs may be too 'simple'. I'm not overly worried about the government
    or foreign actors invading my world. (Even though I'm sure they could.) I
    just have a router-based firewall. My backup machines are powered down
    unless they are being used and they don't respond to wake-up over the
    network so they are safe enough for me. The one in my office backs up
    my two machines (desktop and video file server) and the second
    NAS backs up the first. They are both ZFS RAID1 using TrueNAS. I
    don't use encryption at all. A real dummy...

    But again, I'm not even a Gentoo user any more. I'm a KDE user
    and I could see no performance improvement using Gentoo over
    Kubuntu. My updates happen once a week, roughly, and never
    take more than 5 minutes. In 4 years I've never had an update
    fail. Kubuntu just works for me - but I'll be the first to admit I don't
    know what's running on my machine anymore so I'm not much better
    than being a Windows user in terms of control.

    In the old days (2001) I was a computer OS enthusiast. Today
    I play guitar, bake bread and drink a little wine. Life and focus
    changed. For a guy at home life is ok and I have backups to boot.

    <div dir="ltr"><br><br>On Fri, Aug 26, 2022 at 7:25 AM Rich Freeman &lt;<a href="mailto:rich0@gentoo.org">rich0@gentoo.org</a>&gt; wrote:<br>&gt;<br>&gt; On Fri, Aug 26, 2022 at 10:09 AM Mark Knecht &lt;<a href="mailto:markknecht@gmail.com">markknecht@
    gmail.com</a>&gt; wrote:<br>&gt; &gt;<br>&gt; &gt; Obviously you can do what you are most comfortable with but to me a NAS machine with a bunch of external drives does not sound very reliable.<br>&gt; &gt;<br>&gt;<br>&gt; I would have thought the same,
    but messing around with LizardFS I&#39;ve<br>&gt; found that the USB3 hard drives never disconnect from their Pi4 hosts.<br>&gt; I&#39;ve had more issues with LSI HBAs dying.  Of course I have host-level<br>&gt; redundancy so if one Pi4 flakes out I can
    just reboot it with zero<br>&gt; downtime - the master server is on an amd64 container.  I only have<br>&gt; about 2 drives per Pi right now as well - at this point I&#39;d probably<br>&gt; add more drives per host but I wanted to get out to 5-6 hosts
    first so<br>&gt; that I get better performance especially during rebuilds.  Gigabit<br>&gt; networking is definitely a bottleneck, but with all the chunkservers<br>&gt; on one switch they each get gigabit full duplex to all the others so<br>&gt;
    rebuilds are still reasonably fast.  To go with 10GbE you&#39;d need<br>&gt; hardware with better IO than a Pi4 I&#39;d think, but the main bottleneck<br>&gt; on the Pi4 I&#39;m having is with encryption which hits the CPU.  I am<br>&gt; using dm-crypt
    for this which I think is hardware-optimized.  I will<br>&gt; say that zfs encryption is definitely not hardware-optimized and<br>&gt; really gets CPU-bound, so I&#39;m running zfs on top of dm-crypt.  I<br>&gt; should probably consider if dm-integrity
    makes more sense than zfs in<br>&gt; this application.<br>&gt;<br>&gt; --<br>&gt; Rich<div><br></div><div>Quite interesting Rich. Thanks!</div><div><br></div><div>My needs may be too &#39;simple&#39;. I&#39;m not overly worried about the government </
    <div>or foreign actors invading my world. (Even though I&#39;m sure they could.) I</div><div>just have a router-based firewall. My backup machines are powered down</div><div>unless they are being used and they don&#39;t respond to wake-up over the</
    <div>network so they are safe enough for me. The one in my office backs up</div><div>my two machines (desktop and video file server) and the second</div><div>NAS backs up the first. They are both ZFS RAID1 using TrueNAS. I</div><div>don&#39;t use
    encryption at all. A real dummy...</div><div><br></div><div>But again, I&#39;m not even a Gentoo user any more. I&#39;m a KDE user</div><div>and I could see no performance improvement using Gentoo over</div><div>Kubuntu. My updates happen once a week,
    roughly, and never</div><div>take more than 5 minutes. In 4 years I&#39;ve never had an update</div><div>fail. Kubuntu just works for me - but I&#39;ll be the first to admit I don&#39;t</div><div>know what&#39;s running on my machine anymore so I&#39;m
    not much better</div><div>than being a Windows user in terms of control. </div><div><br></div><div>In the old days (2001) I was a computer OS enthusiast. Today </div><div>I play guitar, bake bread and drink a little wine. Life and focus</div><div>
    changed. For a guy at home life is ok and I have backups to boot.</div></div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to markknecht@gmail.com on Sun Aug 28 01:10:01 2022
    On Fri, Aug 26, 2022 at 4:37 PM Mark Knecht <markknecht@gmail.com> wrote:



    On Fri, Aug 26, 2022 at 4:21 PM Dale <rdalek1967@gmail.com> wrote:

    <SNIP>
    I have looked into OpenNAS and other NAS OS stuff. Some are on USB
    sticks and basically, you shut it down, upgrade the USB stick, insert it
    back into NAS and boot up.
    <SNIP>

    The first version of TrueNAS I used was on a USB stick and it worked fine
    so I'm fairly confident you'd be at least functional.

    One last thing for now - if you do buy a used MB do some research into
    whether it will actually boot from USB. One of the ones I bought actually
    did not do that so I had to dig up a old DVD drive to install from a CD.

    - M

    <div dir="ltr"><br><br>On Fri, Aug 26, 2022 at 4:37 PM Mark Knecht &lt;<a href="mailto:markknecht@gmail.com">markknecht@gmail.com</a>&gt; wrote:<br>&gt;<br>&gt;<br>&gt;<br>&gt; On Fri, Aug 26, 2022 at 4:21 PM Dale &lt;<a href="mailto:rdalek1967@gmail.com"
    rdalek1967@gmail.com</a>&gt; wrote:<br>&gt; &gt;<br>&gt; &lt;SNIP&gt;<br>&gt; &gt; I have looked into OpenNAS and other NAS OS stuff.  Some are on USB sticks and basically, you shut it down, upgrade the USB stick, insert it back into NAS and boot up.<
    &gt; &lt;SNIP&gt;<br>&gt;<br>&gt; The first version of TrueNAS I used was on a USB stick and it worked fine so I&#39;m fairly confident you&#39;d be at least functional.<br><div class="gmail_quote"><div><br></div><div>One last thing for now - if you
    do buy a used MB do some research into whether it will actually boot from USB. One of the ones I bought actually did not do that so I had to dig up a old DVD drive to install from a CD.</div><div><br></div><div>- M</div></div></div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Sun Aug 28 02:20:01 2022
    On Fri, Aug 26, 2022 at 4:21 PM Dale <rdalek1967@gmail.com> wrote:

    <SNIP>
    I have looked into OpenNAS and other NAS OS stuff. Some are on USB
    sticks and basically, you shut it down, upgrade the USB stick, insert it
    back into NAS and boot up.
    <SNIP>

    The first version of TrueNAS I used was on a USB stick and it worked fine
    so I'm fairly confident you'd be at least functional.

    One advantage of starting out this way is that you can try multiple NAS systems on different flash drives without making a hard commitment.

    Keep in mind stuff like log files gets written to the OS drive whether
    it's a flash drive or not, so it wasn't a solution I wanted to stick
    with long term. In my case I have about 30 old hard drives from old
    machines so I just found one and used it. It was a 1TB WD Green drive
    circa 2012 that I purchased for a RAID but learned the hard way not
    to use. ;-)

    Whatever you do, have fun.

    Cheers,
    Mark

    <div dir="ltr"><br><br>On Fri, Aug 26, 2022 at 4:21 PM Dale &lt;<a href="mailto:rdalek1967@gmail.com">rdalek1967@gmail.com</a>&gt; wrote:<br>&gt;<br>&lt;SNIP&gt;<br>&gt; I have looked into OpenNAS and other NAS OS stuff.  Some are on USB sticks and
    basically, you shut it down, upgrade the USB stick, insert it back into NAS and boot up. <div>&lt;SNIP&gt;</div><div><br></div><div>The first version of TrueNAS I used was on a USB stick and it worked fine so I&#39;m fairly confident you&#39;d be at
    least functional.</div><div><br></div><div>One advantage of starting out this way was is that you can try multiple NAS systems on different flash drives without making a hard commitment.</div><div><br></div><div>Keep in mind stuff like log files gets
    written to the OS drive whether it&#39;s a flash drive or not so long term it wasn&#39;t a solution I wanted to stick with long term. In my case I have about 30 old hard drives from old machines so I just found one and used it. In my case it was a 1TB WD
    Green drive circa 2012 that I purchased for a RAID but learned the hard way not to use. ;-)</div><div><br></div><div>Whatever you do, have fun.</div><div><br></div><div>Cheers,</div><div>Mark</div></div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael@21:1/5 to All on Sun Aug 28 10:27:50 2022
    On Sunday, 28 August 2022 00:30:08 BST Dale wrote:
    Mark Knecht wrote:
    On Fri, Aug 26, 2022 at 4:37 PM Mark Knecht <markknecht@gmail.com> wrote:
    On Fri, Aug 26, 2022 at 4:21 PM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>

    I have looked into OpenNAS and other NAS OS stuff. Some are on

    USB sticks and basically, you shut it down, upgrade the USB stick,
    insert it back into NAS and boot up.

    <SNIP>

    The first version of TrueNAS I used was on a USB stick and it worked

    fine so I'm fairly confident you'd be at least functional.

    One last thing for now - if you do buy a used MB do some research into whether it will actually boot from USB. One of the ones I bought
    actually did not do that so I had to dig up a old DVD drive to install
    from a CD.

    - M

    I've got an older donated machine that doesn't boot from USB either.  The
    newer donated machine does; I've booted from USB sticks before.  I'm
    going to pull the side off and see how many drives it can hold and such
    in a bit, if I gather up enough steam.  As it is, I only need three at
    the moment, four maybe later.  Most come with six but this is a factory
    built machine.  I can't recall what it comes with.

    Dale

    :-) :-)

    Depending on the age of the MoBo, even if it can't boot from USB, it should
    be able to boot with PXE (pronounced 'pixie'). Set up a TFTP/PXE server on
    your LAN and the old MoBo will fetch the image(s) to boot with. You can
    take a look here for ideas:

    https://wiki.gentoo.org/wiki/Installation_alternatives#Diskless_install_using_PXE_from_the_LiveCD
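
    On the server side, dnsmasq alone can cover the DHCP and TFTP parts; a
    minimal sketch (addresses, interface and paths are placeholders for your
    own LAN):

        # /etc/dnsmasq.conf
        interface=eth0
        dhcp-range=192.168.1.100,192.168.1.150,12h
        dhcp-boot=pxelinux.0        # bootloader the client fetches first
        enable-tftp
        tftp-root=/srv/tftp         # put pxelinux.0 and the images here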


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Steinmetzger@21:1/5 to All on Sun Aug 28 23:10:02 2022
    Am Fri, Aug 26, 2022 at 07:09:32AM -0700 schrieb Mark Knecht:
    On Fri, Aug 26, 2022 at 4:27 AM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>

    I looked into the Raspberry and the newest version, about $150 now,
    doesn't even have SATA ports. I can add a thing called a "hat" I think
    that adds a couple but thing is, that costs more and still isn't enough. I really don't like USB and hard drive mixing. Every time I do that, the
    hard drive turns into a door stop. Currently, I have three Rosewill
    external enclosures and they have USB and eSATA ports. I use the eSATA connections and no problems. It's also really fast. So, I plan to stick with SATA connections.

    Is there a particular reason why your mailer inserts the quote character
    only on the first line of a quote paragraph? It makes reading your replies
    a little difficult because it is not visible at first glance where your
    quote ends and your reply starts.

    You do NOT want the Rasp Pi for this. You would have to compile and
    maintain the OS yourself just adding work and the disk interfaces aren't
    high performance enough.

    Why is that? My raspi runs on bog-standard Raspberry Pi OS (i.e. Debian). I
    am also evaluating Arch on ARM. Neither requires any compilation or manual
    maintenance on my part, just the regular updates via the package manager.

    The speed of a NAS is _mostly_ a balance between network speed and disk speed. Processor usage for me is generally about 20%. If your network is GigaBit then you can sustain somewhere about 850Mb/S on the cables which translates nicely to about 100 MegaByte/S on your disk drives.

    If the NAS is attached via gigabit only, I would not concern myself with
    saturating it. Those 117 MB/s are nothing a drive can’t handle in most
    cases (especially if used in a RAID in whatever form).

    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    „Someone who defines a problem already solved half of it. “ – Julian Huxley

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Steinmetzger@21:1/5 to All on Sun Aug 28 23:40:01 2022
    Am Fri, Aug 26, 2022 at 06:26:39AM -0500 schrieb Dale:

    I looked into the Raspberry and the newest version, about $150 now,
    doesn't even have SATA ports.  I can add a thing called a "hat" I think
    that adds a couple but thing is, that costs more and still isn't
    enough.

    I run a raspi with some basic services, most importantly a pihole DNS filter and a PIM server. But I find it hacky-patchy with its flimsy USB power cable poking out of the side. I’d prefer a more sturdy construction, which is why
    I bought a NAS-style PC (zotac zbox nano with a passive 6 W Celeron). But
    that thing is so fast for every-day computing that I actually put a KDE
    system on it and now I don’t want to “downgrade” it to a mere server.

    I have a old computer that I might could use.  It is 4 core something
    and I think it has 4GBs of memory, maxed out.  I think it will perform
    well enough but wish it had a little more horses in it.

    An Intel Celeron from the Haswell generation (i.e. 8+ years old) did not
    have AES-NI yet, and it reached around 160 MB/s encryption speed. I tried
    it, because I had dealings with those processors in the past before I built
    my own NAS. Your old tech may still be usable, but please also consider
    power cost and its impact on the environment if it runs 24/7.

    I looked at something called ITX but they have only one PCIe slot
    usually.  That's not enough.  I'd like to have two 6 or 8 port SATA cards.  Then balance the drives on each.  I think some of the through
    put is shared so the more drives on it, the slower it can be.  I'd like
    to have two such cards. 12 or 16 drives should be enough to last a
    while.

    Part of me wants to do RAID but not sure about that.

    Dealing with so many drives, I think there’s no getting around RAID. All drives fail. The more drives you have, the earlier the first failure. With
    that many drives, I wouldn’t want to handle syncs between them by hand in order to get redundancy or backups of backups.

    While I don't think I need a super powerful machine, I do want enough
    that it will perform well.

    The question is: what do you need it to perform? If it’s just storing and serving files, save the bucks and use any low-end x86 processor with AES instructions. My NAS first ran on the above mentioned Celeron, but later I
    did upgrade to a low-power i3 (because the case¹ is very cramped, I don’t want too much heat in there). It is a dual-core with SMT and AES at 35 W.
    IIRC, it can encrypt around 800-something MB/s. And that is an old i3-4170. Modern chips are most probably much faster still.

    I may use actual NAS software too.

    What is “actual NAS software”? Do you mean a NAS distribution? From my understanding, those distros install the usual services (samba, ftp, etc.)
    and develop a nice web frontend for it. But since those are web
    applications, there isn’t much to be gained from march=native.

    I still run Gentoo on my NAS, just for the old habit and because it comes
    with ZFS right out of the box. But the services I still configure the
    classical way – ssh, vim and config files.

      I'm sure Gentoo would work too with proper tweaking but then I need to
    deal with compiling things.  Of course, no libreoffice or anything big so
    it may not be too bad.  Thing is, the NAS software will likely be more efficient since it is designed for the purpose. 

    More efficient than what?

    My NAS is powered up every few weeks, or often only every few months. And
    then the first thing I do is—of course—a world update. And as you
    mentioned, the install base is rather small. No graphical stuff whatsoever
    (server board, small ASMedia VGA chip on-board, no Intel graphics). The
    biggest pkgs are gcc (around 2 hours build time) and llvm. The rest is
    userland stuff that helps me in dealing with the media files the NAS
    serves. Mkvtoolnix is a compile hog at around half an hour.

    I just know I need a proper machine for the task.  I'm getting lots of
    data fast now.  I hit the 80% mark overnight.  At 90%, I consider it critical.  Something must be done soon. 

    How about watching the spoils for a change instead of only ever downloading
    it? ;-)


    ¹ https://www.inter-tech.de/en/products/ipc/storage-cases/sc-4100
    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    Everything has its two sides. But a quadrangle has three.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wol@21:1/5 to Frank Steinmetzger on Sun Aug 28 23:40:02 2022
    On 28/08/2022 22:07, Frank Steinmetzger wrote:
    Is there a particular reason why your mailer inserts the quote character
    only on the first line of a quote paragraph? It makes reading your replies a little difficult because it is not visible on first glance where your quote ends and your reply starts.

    Because the OP's mailer sent it as one line per paragraph?

    My mailer (Tbird) is configured for plain text, but still screws up when
    it receives html junk.

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to antlists@youngman.org.uk on Mon Aug 29 00:00:01 2022
    On Sun, Aug 28, 2022 at 2:33 PM Wol <antlists@youngman.org.uk> wrote:

    On 28/08/2022 22:07, Frank Steinmetzger wrote:
    Is there a particular reason why your mailer inserts the quote character
    only on the first line of a quote paragraph? It makes reading your
    replies a little difficult because it is not visible at first glance
    where your quote ends and your reply starts.

    Because the OPs mailer sent it as one line per paragraph?

    My mailer (Tbird) is configured for plain text, but still screws up when
    it receives html junk.

    Cheers,
    Wol


    My address is GMail, and I just use Chrome. Responding to this list
    requires me to Ctrl-A and then remove ALL formatting which inserts
    the greater than symbol.

    For 99% of my life no one cares about how an email response is
    formatted. The only place that complains is this list so I do all of
    that above to try to make it better for the list.

    Maybe I made a mistake on the recent response that Frank
    doesn't like? If it's happening on every email I send then what
    Frank is seeing is not what I'm seeing.

    In my experience it's easier to ride the Google horse in the
    direction the Google horse is going. Turning off all formatting
    causes too many problems in real life outside of Gentoo
    email lists.

    My apologies,
    Mark

    <div dir="ltr"><br><br>On Sun, Aug 28, 2022 at 2:33 PM Wol &lt;<a href="mailto:antlists@youngman.org.uk">antlists@youngman.org.uk</a>&gt; wrote:<br>&gt;<br>&gt; On 28/08/2022 22:07, Frank Steinmetzger wrote:<br>&gt; &gt; Is there a particular reason why
    your mailer inserts the quote character<br>&gt; &gt; only on the first line of a quote paragraph? It makes reading your replies a<br>&gt; &gt; little difficult because it is not visible on first glance where your quote<br>&gt; &gt; ends and your reply
    starts.<br>&gt;<br>&gt; Because the OPs mailer sent it as one line per paragraph?<br>&gt;<br>&gt; My mailer (Tbird) is configured for plain text, but still screws up when<br>&gt; it receives html junk.<br>&gt;<br>&gt; Cheers,<br>&gt; Wol<div><br></div><
    <br></div><div>My address is GMail, and I just use Chrome. Responding to this list</div><div>requires me to Ctrl-A and then remove ALL formatting which inserts </div><div>the greater than symbol.</div><div><br></div><div>For 99% of my life no one
    cares about how an email response is </div><div>formatted. The only place that complains is this list so I do all of</div><div>that above to try to make it better for the list. </div><div><br></div><div>Maybe I made a mistake on the recent response
    that Frank </div><div>doesn&#39;t like? If it&#39;s happening on every email I send then what </div><div>Frank is seeing is not what I&#39;m seeing. </div><div><br></div><div>In my experience it&#39;s easier to ride the Google horse in the</div><div>
    direction the Google horse is going. Turning off all formatting</div><div>causes too many problems in real life outside of Gentoo</div><div>email lists.</div><div><br></div><div>My apologies,</div><div>Mark</div><div><br></div><div><br></div><div><br></
    </div>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wol@21:1/5 to Mark Knecht on Mon Aug 29 01:40:01 2022
    On 28/08/2022 22:53, Mark Knecht wrote:


    On Sun, Aug 28, 2022 at 2:33 PM Wol <antlists@youngman.org.uk> wrote:

    On 28/08/2022 22:07, Frank Steinmetzger wrote:
    Is there a particular reason why your mailer inserts the quote character
    only on the first line of a quote paragraph? It makes reading your
    replies a little difficult because it is not visible at first glance
    where your quote ends and your reply starts.

    Because the OPs mailer sent it as one line per paragraph?

    My mailer (Tbird) is configured for plain text, but still screws up when it receives html junk.

    Cheers,
    Wol


    My address is GMail, and I just use Chrome. Responding to this list
    requires me to Ctrl-A and then remove ALL formatting which inserts
    the greater than symbol.

    For 99% of my life no one cares about how an email response is
    formatted. The only place that complains is this list so I do all of
    that above to try to make it better for the list.

    Well, a lot of us are greybeards who predate HTML. Grumpy old greybeards
    don't like change :-) and I include myself in that.

    Maybe I made a mistake on the recent response that Frank
    doesn't like? If it's happening on every email I send then what
    Frank is seeing is not what I'm seeing.

    In my experience it's easier to ride the Google horse in the
    direction the Google horse is going. Turning off all formatting
    causes too many problems in real life outside of Gentoo
    email lists.

    HTML is f*cked up for so many things (for which clueless idiots
    recommend it). At work, living inside the GMail/Chrome eco-system is
    mandatory. And I regularly swear at it because I get over-wide emails
    that are a bastard to read.

    My apologies,
    Mark

    Don't apologise - it's not your fault. If you use gmail/chrome, that's
    fine. It would just be nice if Google didn't screw up your mail. The
    really scary thing is all these things Google do behind your back are
    capable of (and as far as the linux guys are concerned regularly do)
    screwing up your security.

    If you want to interact with linux devs you will find they are even more anti-gmail than here - because what you are sending is not what you
    think you are sending! But until then, don't beat yourself up over it.
    Don't upset people unnecessarily, but if it causes you grief, don't do
    it. If people don't like it, they can ignore it.

    Cheers,
    Wol

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Steinmetzger@21:1/5 to All on Mon Aug 29 16:50:02 2022
    Am Mon, Aug 29, 2022 at 12:49:56AM -0500 schrieb Dale:

    I run a raspi with some basic services, most importantly a pihole DNS filter
    and a PIM server. But I find it hacky-patchy with its flimsy USB power cable
    poking out of the side. I’d prefer a more sturdy construction, which is why
    I bought a NAS-style PC (zotac zbox nano with a passive 6 W Celeron). But that thing is so fast for every-day computing that I actually put a KDE system on it and now I don’t want to “downgrade” it to a mere server.

    I googled that little guy and that is a pretty neat little machine.  Basically it is a tiny puter but really tiny, just not tiny on
    features.  The Zotac systems, even some older ones, are pretty nifty.  I think I read they have a ITX mobo which is really compact.

    ITX (or rather mini-ITX) is 17×17 cm: https://en.wikipedia.org/wiki/Mini-ITX
    Those NUC-types are much smaller. I don’t quite know whether that board form
    factor has a name of its own (aside from NUC, but that’s a marketing name
    from Intel).

    It sort of reminds me of a cell phone.  Small but fast CPUs, some even
    have decent amounts of ram so they can handle quite a lot.  Never heard of this thing before.  I wouldn't mind having one of those to work as my OpenVPN server thingy.  I'd just need to find one that has 2 ethernet
    ports and designed for that sort of task. 

    Many of the ZBoxes have dual NICs, which is what makes them very popular
    among server and firewall hackers because they are also very frugal. My particular model is the CI331: https://www.zotac.com/us/product/mini_pcs/zbox-ci331-nano-barebone
    It has one 2.5″ slot and one undocumented SATA M.2 which can only be reached
    by breaking the warranty seal. That’s where zotac installs a drive if you
    buy a zbox with Winblows pre-installed.

    After updating the BIOS, which allowed the CPU to enter lower C states, it draws 6 W on idle. It’s not a record, but still not so much for a 24/7 x86 system.

    I have a old computer that I might could use.  It is 4 core something
    and I think it has 4GBs of memory, maxed out.  I think it will perform
    well enough but wish it had a little more horses in it.
    An Intel Celeron from the Haswell generation (i.e. 8+ years old) did not have AES-NI yet, and it reached around 160 MB/s encryption speed. I tried it, because I had dealings with those processors in the past before I built my own NAS. Your old tech may still be usable, but please also consider power cost and its impact on the environment if it runs 24/7.

    I'm not real sure what that old machine has.  I have Linux, can't recall
    the distro tho, on it.  Is there a way to find out if it supports the
    needed things?

    cat /proc/cpuinfo and look for aes or the like. Or enter the processor name
    into Wikipedia, which will redirect you to the “List of processors by
    <Manufacturer>” with huge tables of comparison and general info on an
    architecture’s improvements over its predecessor, like AES.
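
    A quick one-liner for the flag check (nothing fancy, just grep):

        grep -q aes /proc/cpuinfo && echo "AES-NI present" || echo "no AES-NI"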

    I may use actual NAS software too.
    What is “actual NAS software”? Do you mean a NAS distribution? From my understanding, those distros install the usual services (samba, ftp, etc.) and develop a nice web frontend for it. But since those are web applications, there isn’t much to be gained from march=native.

    I've seen TrueNAS, OpenNas I think and others.  Plus some just use
    Ubuntu or something.  Honestly, almost any linux distro with no or a
    minimal GUI would work. 

    OK, but then you don’t run those on Gentoo. And those NAS distros are so small and light-weight, they can be run from a USB stick if you so choose.
    My NAS’s mainboard has a USB-A socket on-board for that reason.

    I'm sure Gentoo would work too with proper tweaking but then I need to
    deal with compiling things.  Of course, no libreoffice or anything big so
    it may not be too bad.  Thing is, the NAS software will likely be more
    efficient since it is designed for the purpose. 
    More efficient than what?

    I figure something like OpenNAS or TrueNAS would work better as it is
    built to be user friendly and has tools by default to manage things. 

    Yeah, I was thinking of using one of those, too. But I liked the idea of
    being more flexible with some ZFS voodoo which the web interfaces won’t
    allow, like creating a degraded pool because I don’t have enough HDDs,
    filling that up and adding the missing disk later. Sometimes I wish for the
    greater ease of use of a web interface.
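
    The usual trick for that, as a sketch rather than battle-tested advice
    (device names and sizes are placeholders): stand a sparse file in for the
    missing disk and never actually write to it.

        truncate -s 10T /var/tmp/fake-disk.img     # sparse placeholder
        zpool create tank raidz /dev/sda /dev/sdb /var/tmp/fake-disk.img
        zpool offline tank /var/tmp/fake-disk.img  # pool is now DEGRADED
        rm /var/tmp/fake-disk.img
        # when the real disk arrives:
        zpool replace tank /var/tmp/fake-disk.img /dev/sdc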

    I'm pretty sure they support RAID and such by default.  It is likely set
    up to make setting it up easier too. 

    They do, naturally. And yes, the frontends hide lots of the gory details.

    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    Even baldies do have streaks of luck.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Steinmetzger@21:1/5 to All on Tue Aug 30 16:30:01 2022
    Am Mon, Aug 29, 2022 at 04:28:55PM -0500 schrieb Dale:

    It sort of reminds me of a cell phone.  Small but fast CPUs, some even
    have decent amounts of ram so they can handle quite a lot.  Never heard of
    this thing before.  I wouldn't mind having one of those to work as my
    OpenVPN server thingy.  I'd just need to find one that has 2 ethernet
    ports and designed for that sort of task. 
    Many of the ZBoxes have dual NICs, which is what makes them very popular among server and firewall hackers; they are also very frugal with power. My particular model is the CI331: https://www.zotac.com/us/product/mini_pcs/zbox-ci331-nano-barebone
    It has one 2.5″ bay and one undocumented SATA M.2 slot which can only be reached
    by breaking the warranty seal. That’s where Zotac installs a drive if you buy a ZBox with Winblows pre-installed.

    After updating the BIOS, which allowed the CPU to enter lower C-states, it draws 6 W at idle. It’s not a record, but still not much for a 24/7 x86
    system.

    I was looking for one with two ethernet ports but wasn't having any luck yet.  I did find and download like a catalog thing but it will take a
    while to dig through it.  They have a lot of models for different
    purposes.

    Here’s a list of barebone systems with dual NICs: https://skinflint.co.uk/?cat=barepc&xf=19071_2
    You can narrow down your criteria in much detail, such as passive cooling¹, CPU vendor and features (hello, AES), or even whether the manufacturer officially rates it for continuous operation. Obviously, mini barebones are not suited for big NAS duty due to their form factor.

    I mentioned this site before. But even though it’s EU centric, many products are available worldwide (or in regional variants). Others on the list chimed
    in and named more sites, but I can’t remember them.

    I did see a pre-made thing on ebay, can't recall the brand, that cost hundreds and was made just for VPNs and such.

    VPN appliances are pricey due to their industrial design. But for normal
    dudes like us, a consumer-grade device might be better suited. Especially
    if it can be used for other purposes, such as a media source for the TV.

    It was really pricey tho.  But, you plug it in, boot it up and it has everything installed and then some to control network traffic.  It had
    stuff I never heard of. 

    Industrial stuff, as I said. And you pay for the bespoke software, without which the appliance probably won’t work.

    I have an old computer that I might could use.  It is 4 core something
    and I think it has 4GBs of memory, maxed out.  I think it will perform
    well enough but wish it had a little more horses in it.
    I'm not real sure what that old machine has.  I have Linux, can't recall
    the distro tho, on it.  Is there a way to find out if it supports the
    needed things?
    cat /proc/cpuinfo and look for aes or the like.

    I have booted that old thing up and I grepped cpuinfo and no AES that I
    could see or grep could find.  Must be before its time. 

    While I had it booted up, I checked into what all it does have.  It only
    has 4 SATA ports, one already used for the OS hard drive.  I could
    likely run it from a USB stick, which would make all 4 available.  It has 8GBs of memory too.  CPU is an AMD Phenom 9750 Quad running at 2.4GHz.  I found it odd that cpuinfo showed a different speed I think.

    cpuinfo shows the current frequency, not the maximum.
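    For example (a sketch; lscpu’s output fields vary a bit between versions and CPUs):

      # Per-core frequency right now:
      grep 'cpu MHz' /proc/cpuinfo
      # Rated minimum/maximum, where cpufreq exposes them:
      lscpu | grep -i 'MHz'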

    It's not a speedster or anything but I may can do something with it.

    According to https://en.wikipedia.org/wiki/List_of_AMD_Phenom_processors the 9750 Quad is a 95 W or 125 W processor. Going by https://www.cpubenchmark.net/cpu.php?cpu=AMD+Phenom+9750+Quad-Core&id=306
    its single-thread power is ca. ⅔ that of the Celeron N5100 on my ZBox (at 6 W):
    https://www.cpubenchmark.net/cpu.php?cpu=Intel+Celeron+N5100+%40+1.10GHz&id=4331

    I'm pretty sure they support RAID and such by default.  It is likely set
    up to make setting it up easier too. 
    They do, naturally. And yes, the frontends hide lots of the gory details.

    That's my thinking since RAID, ZFS and such are new to me.  Of course,
    front ends do take away a lot of fine controls too, usually. 

    Setting up ZFS is, from a technical POV, not that much different from LVM, which you are familiar with. You have block devices, over which you create a virtual device (vdev). A vdev can be a single disk, or a mirror of disks, or
    a parity RAID. A storage pool is then created over one or more vdevs. And in that pool you can create several ZFS file systems (or use just the one that is created with
    the pool itself).

    ┌POOL───────────────────────────┐
    │┌VDEV 1────┐┌─VDEV 2────────┐ ┌┴ZFS────┐
    ││  mirror  ││  parity RAID  │ │ /pool  │
    ││┌───┐┌───┐││┌───┐┌───┐┌───┐│ ├─ZFS────┴─────┐
    │││sda││sdb││││sdc││sdd││sde││ │ /pool/video  │
    ││└───┘└───┘││└───┘└───┘└───┘│ └┬─────────────┘
    │└──────────┘└───────────────┘ │
    └───────────────────────────────┘

    In comparison:
    LVM: block device/partition → physical volume → volume group → logical volume → any file system
    ZFS: block device/partition → vdev → pool → ZFS filesystem
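    Translated into commands, the layout from the diagram would look roughly like this (an untested sketch; the pool name and device names are just the ones from the picture):

      # One pool from two vdevs: a two-disk mirror and a three-disk raidz1.
      zpool create pool mirror /dev/sda /dev/sdb raidz1 /dev/sdc /dev/sdd /dev/sde
      # A child file system inside the pool, mounted at /pool/video by default.
      zfs create pool/video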

    The beauty is that ZFS can take care of everything. You just give it whole block devices and at the other end you get a mountable file system. What you also get is protection from bitrot thanks to in-FS checksumming. You don’t get that with rsync on ext4. That’s why I eventually decided on ZFS for my NAS over other, perhaps more practical solutions like LVM on mdraid. When it checks the pool’s integrity, it is faster than mdraid, because it knows
    where actual data is stored, so it can skip empty parts.

    The biggest disadvantage over LVM is that it’s rather limited regarding adding or removing disks. You cannot simply add a disk to a parity VDEV,
    only to a mirror (which only increases redundancy, not capacity). And once a new vdev is added to a pool, you cannot remove it, only replace its disks. People added a single disk to a pool by accident and had to rebuild the
    entire thing as a result. (Though I think that particular problem has been dealt with recently.)

    There exist of course some technical pitfalls. The ashift parameter
    determines how big the smallest block of data is and should not be smaller
    than the HDD’s block size. Hence, ashift=12 (2^12 = 4096) is the minimum one should use these days, but I think it has become the default anyway. Another is the record size, which is the logical block size for striping, IIRC. For bigger files like video, it’s more efficient to use a bigger record size (say, 1 MiB) than a smaller one like 64 k, because it improves the ratio of metadata to payload.
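    Both are a single option at creation time (a sketch with made-up names; recordsize can also be changed later with zfs set, ashift cannot):

      # Force 4 KiB minimum blocks for the whole pool (fixed once created).
      zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
      # Use 1 MiB records for the dataset that holds large video files.
      zfs create -o recordsize=1M tank/video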


    ¹ Regarding passive cooling: I like it, because fans break and make noise.
    But running Gentoo on a tiny box with a small heatsink may put too much
    stress on the electronics. When I tested the performance of the ZBox, I played Warzone 2100, a 3D realtime strategy game, on my 2560×1440 screen
    (smoothly, I might add). But afterwards the whole case was HOT.

    --
    Grüße | Greetings | Salut | Qapla’
    Please do not share anything from, with or about me on any social network.

    Save water! Dilute it!
