• Re: [gentoo-user] Long boot time after kernel update

    From Wols Lists@21:1/5 to Jacques Montier on Mon Dec 27 01:50:02 2021
    On 26/12/2021 18:50, Jacques Montier wrote:
    Hello all,

    I updated to the latest stable kernel, 5.15.11-gentoo, with the same configuration as the old kernel, and now the boot time is quite long.

    Test :
    5.10.76-gentoo-r1 kernel : boot time 30s
    5.15.11-gentoo kernel : boot time 70s

    My setup (non-EFI):
    - 250 GB SSD: /dev/sdd1 ext2 for /boot and /dev/sdd2 ext4 for /
    - 250 GB SSD: /dev/sdc1 ext4 for /home
    - Two 2 TB SATA Seagate BarraCuda 3.5" disks: /dev/sda1 ext4 for data and /dev/sdb1 ext4 for data backup (not RAID)

    With the new kernel, the two Seagate disks seem to make the boot time
    considerably longer.

    Test :
    booting without mounting the disks : 20s
    booting with mounting only one disk : 25s
    booting with both disks : more than 60s

    Testing the disks :
    - smartctl -s on -a /dev/sda and smartctl -s on -a /dev/sdb : No error reported.
    - fsck -a /dev/sda1 and fsck -a /dev/sdb1 : clean
    - e2fsck -cfpv /dev/sda1 : clean

    Nevertheless, dmesg shows a lot of errors (attached image) with the new kernel.
    Those errors do not appear with the 5.10.76-gentoo-r1 kernel.

    I'm rather confused...
    Do you have any idea?

    What does fdisk's print command say? Are your partitions misaligned?

    Unlikely, but it depends on how long ago they were partitioned. There's all
    this stuff about switching from 512B to 4K sectors, and that *could* be
    the problem.

    Cheers,
    Wol

  • From Jacques Montier@21:1/5 to All on Mon Dec 27 12:10:01 2021
    On Mon, Dec 27, 2021 at 01:44, Wols Lists <antlists@youngman.org.uk> wrote:

    On 26/12/2021 18:50, Jacques Montier wrote:
    Hello all,

    I update to the last stable kernel 5.15.11-gentoo with the same configuration as the old kernel and now, the boot time is quite long.

    Test :
    5.10.76-gentoo-r1 kernel : boot time 30s
    5.15.11-gentoo kernel : boot time 70s

    My setup (non EFI) :
    - SSD 250 Go : /dev/sdd1 ext2 for boot and /dev/sdd2 ext4 for /
    - SSD 250 Go /dev/sdc1 ext4 for home
    - Two 2T sata disks Seagate BarraCuda 3.5 /dev/sda1 ext4 for data and /dev/sdb1 ext4 for data backup (Not Raid)

    With the new kernel, the two Seagate disks seem to make the boot time
    quite longer.

    Test :
    booting without mounting the disks : 20s
    booting with mounting only one disk : 25s
    booting with both disks : more than 60s

    Testing the disks :
    - smartctl -s on -a /dev/sda and smartctl -s on -a /dev/sdb : No error reported.
    - fsck -a /dev/sda1 and fsck -a /dev/sdb1 : clean
    - e2fsck -cfpv /dev/sda1 : clean

    Nevertheless, dmesg shows a lot of errors (attached image) with the new kernel.
    Those errors do not appear with 5.10.76-gentoo-r1 kernel.

    I'm rather confused...
    Have you any idea ?

    What does fdisk print say? Are your partitions mis-aligned?

    Unlikely, but it depends how long ago they were partitioned. There's all
    this stuff about switching from 512B to 4K sectors and that *could* be
    the problem.

    Cheers,
    Wol


    Thanks Wol,

    The output of fdisk for my two 2 TB disks:

    ...........................
    Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: ST2000DM008-2FR1
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0xe0e26837

    Device     Boot Start        End    Sectors  Size Id Type
    /dev/sda1        2048 3907028991 3907026944  1.8T 83 Linux

    Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: ST2000DM008-2FR1
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x7c91a79a

    Device     Boot Start        End    Sectors  Size Id Type
    /dev/sdb1        2048 3907028991 3907026944  1.8T 83 Linux

    ...............................

    Well, I don't know whether my partitions are aligned or misaligned... How
    can I check that?

    Cheers,

    --
    Jacques
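
    A quick way to check alignment from the fdisk output above - a minimal
    sketch, assuming the 4096-byte physical sector size fdisk reports; the
    arithmetic and the parted subcommand are standard, the device names are
    just this system's:

        # A partition is 4K-aligned if its start sector (in 512-byte units)
        # is a multiple of 8, i.e. start * 512 is a multiple of 4096.
        echo $((2048 % 8))    # /dev/sda1 starts at sector 2048; 0 means aligned

        # parted can run the same check directly:
        parted /dev/sda align-check optimal 1

    Both partitions above start at sector 2048, i.e. at 1 MiB, which satisfies
    4K alignment.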


  • From Wols Lists@21:1/5 to Jacques Montier on Mon Dec 27 12:40:02 2021
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?

    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one" problem:
    if there's a mismatch between the kernel block size and the disk block
    size, every write requires a read-modify-write cycle, which of course
    knackers performance. I got hit by that a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol
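
    For anyone wanting to see the mismatch Wol describes, the logical and
    physical sector sizes are exposed in sysfs - a minimal sketch, assuming the
    usual block-layer sysfs layout; sda is only an example device:

        cat /sys/block/sda/queue/logical_block_size    # typically 512 on these drives
        cat /sys/block/sda/queue/physical_block_size   # 4096 on a 512e drive

    With a misaligned partition, a 4096-byte filesystem block straddles two
    physical sectors, so the drive has to read back and rewrite both of them
    for every write - hence the performance hit.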

  • From Michael@21:1/5 to All on Mon Dec 27 13:40:50 2021
    On Monday, 27 December 2021 11:32:39 GMT Wols Lists wrote:
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?

    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one" problem -
    if there's a mismatch between the kernel block size and the disk block
    size any writes required doing a read-update-write cycle which of course knackered performance. I had that hit a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol

    I also thought of misaligned boundaries when I first saw the error, but the mention of Seagate by the OP pointed me to another edge case which crept up with zstd compression on ZFS. I'm mentioning it here in case it is relevant:

    https://livelace.ru/posts/2021/Jul/19/unaligned-write-command/

    HTH,

  • From Wols Lists@21:1/5 to Michael on Mon Dec 27 14:50:01 2021
    On 27/12/2021 13:40, Michael wrote:
    On Monday, 27 December 2021 11:32:39 GMT Wols Lists wrote:
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?

    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one" problem -
    if there's a mismatch between the kernel block size and the disk block
    size any writes required doing a read-update-write cycle which of course
    knackered performance. I had that hit a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol

    I also thought of misaligned boundaries when I first saw the error, but the mention of Seagate by the OP pointed me to another edge case which crept up with zstd compression on ZFS. I'm mentioning it here in case it is relevant:

    https://livelace.ru/posts/2021/Jul/19/unaligned-write-command/

    That might be of interest to me ... I'm getting system lockups but it's
    not an SSD. I've got two IronWolves and a Barracuda.

    But I notice the OP has a Barra*C*uda. Note the different spelling.
    That's a shingled (SMR) drive, I believe, which shouldn't make a lot of
    difference under light usage, but you don't want to hammer it!

    Cheers,
    Wol

  • From Dale@21:1/5 to Wols Lists on Mon Dec 27 15:20:02 2021
    Wols Lists wrote:
    On 27/12/2021 13:40, Michael wrote:
    On Monday, 27 December 2021 11:32:39 GMT Wols Lists wrote:
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?

    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one"
    problem -
    if there's a mismatch between the kernel block size and the disk block
    size any writes required doing a read-update-write cycle which of
    course
    knackered performance. I had that hit a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol

    I also thought of misaligned boundaries when I first saw the error,
    but the
    mention of Seagate by the OP pointed me to another edge case which
    crept up
    with zstd compression on ZFS.  I'm mentioning it here in case it is
    relevant:

    https://livelace.ru/posts/2021/Jul/19/unaligned-write-command/

    that might be of interest to me ... I'm getting system lockups but
    it's not an SSD. I've got two IronWolves and a Barracuda.

    But I notice the OP has a Barra*C*uda. Note the different spelling.
    That's a shingled drive I believe, which shouldn't make a lot of
    difference in light usage, but you don't want to hammer it!

    Cheers,
    Wol



    I don't recall seeing this mentioned, but it may be part of the issue
    unless I'm missing something that rules it out.  Could it be that one of
    the drives is an SMR drive?  I recently made a new backup after wiping out
    the drive.  I know the backup drive is an SMR drive.  At first it copied at
    a fairly normal speed, but after a short time it started slowing down.
    At times it would do only about 50 to 60 MB/s.  It started out
    at well over 100 MB/s, which is fairly normal for this rig.  I would
    stop the copy process, let it catch up and restart, just to give it some
    time to process.  I can't say it was any faster that way tho.

    The way I noticed my drive was SMR: I could feel the heads going back
    and forth by putting my hand on the enclosure.  It had a bumpy feel to
    it.  You can't really hear it tho.  If you can feel those little bumps
    even when the drive isn't mounted, I'd be thinking it is an SMR drive.
    There are also sites where you can look this sort of thing up.  If needed, I can go dig out some links.

    Just thought it worth a mention.

    Dale

    :-)  :-) 
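
    A sketch of how one could pull the exact model string and then check it
    against the manufacturer's CMR/SMR listings - the smartctl and lsblk
    options are standard, the device name is only an example:

        smartctl -i /dev/sdb | grep -i 'device model'
        lsblk -d -o NAME,MODEL,SIZE

    Seagate, WD and Toshiba all publish tables mapping model numbers to CMR
    or SMR recording.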

  • From Rich Freeman@21:1/5 to antlists@youngman.org.uk on Mon Dec 27 15:20:01 2021
    On Mon, Dec 27, 2021 at 8:46 AM Wols Lists <antlists@youngman.org.uk> wrote:

    On 27/12/2021 13:40, Michael wrote:
    On Monday, 27 December 2021 11:32:39 GMT Wols Lists wrote:
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?

    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one" problem -
    if there's a mismatch between the kernel block size and the disk block
    size any writes required doing a read-update-write cycle which of course
    knackered performance. I had that hit a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol

    I also thought of misaligned boundaries when I first saw the error, but the mention of Seagate by the OP pointed me to another edge case which crept up with zstd compression on ZFS. I'm mentioning it here in case it is relevant:

    https://livelace.ru/posts/2021/Jul/19/unaligned-write-command/

    that might be of interest to me ... I'm getting system lockups but it's
    not an SSD. I've got two IronWolves and a Barracuda.

    But I notice the OP has a Barra*C*uda. Note the different spelling.
    That's a shingled drive I believe, which shouldn't make a lot of
    difference in light usage, but you don't want to hammer it!

    I've run into this issue and I've seen rare reports of it online, but
    no sign of resolution. I'm pretty sure it is some sort of bug in the
    kernel. I've tended to see it under load, and mostly when using zfs.
    I do not use zstd compression and do not have any zvols on the pools
    that had this issue. So, either there are multiple problems, or that
    linked post did not correctly identify the root cause (which seems
    likely). I'm guessing it is triggered under load and perhaps using
    zstd compression helps create that load.

    I haven't seen it much lately - probably because I've shifted a lot of
    my load to lizardfs, and also because I'm using USB3 hard drives for the
    bulk of my storage; since these seem to be ATA errors, taking the SATA
    host and its drivers out of the path may simply bypass the problem.

    I doubt this has anything to do with physical/logical sector size and
    partition alignment. The disks should still work correctly if the
    partitions aren't aligned to the physical sectors - you would just see
    performance degradation. In any case, all my drives are aligned on
    physical sector boundaries. I'm not familiar enough with ATA to
    understand what the actual errors are referring to.

    Here is an example of one of the errors I've had in the past from one
    of these situations. A zpool scrub usually clears up any damage and
    then the drive works normally until the issue happens again (which
    hasn't happened in quite a while for me now). I have a dump of the
    SMART logs and the kernel ring buffer:

    ATA Error Count: 1
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.

    Error 1 occurred at disk power-on lifetime: 12838 hours (534 days + 22 hours)
    When the command that caused the error occurred, the device was
    active or idle.

    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    -- -- -- -- -- -- --
    84 51 e0 88 cc c3 06 Error: ICRC, ABRT at LBA = 0x06c3cc88 = 113495176

    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    -- -- -- -- -- -- -- -- ---------------- --------------------
    61 00 c0 68 cb c3 40 08 2d+00:45:18.962 WRITE FPDMA QUEUED
    60 00 b8 98 67 00 40 08 2d+00:45:18.917 READ FPDMA QUEUED
    60 00 b0 98 65 00 40 08 2d+00:45:18.916 READ FPDMA QUEUED
    60 00 a8 98 66 00 40 08 2d+00:45:18.916 READ FPDMA QUEUED
    61 00 a0 68 ca c3 40 08 2d+00:45:18.879 WRITE FPDMA QUEUED

    [354064.268896] ata6.00: exception Emask 0x11 SAct 0x1000000 SErr 0x480000 action 0x6 frozen
    [354064.268907] ata6.00: irq_stat 0x48000008, interface fatal error
    [354064.268910] ata6: SError: { 10B8B Handshk }
    [354064.268915] ata6.00: failed command: WRITE FPDMA QUEUED
    [354064.268919] ata6.00: cmd 61/00:c0:68:cb:c3/07:00:06:01:00/40 tag 24 ncq dma 917504 out
                             res 50/00:00:68:cb:c3/00:07:06:01:00/40 Emask 0x10 (ATA bus error)
    [354064.268922] ata6.00: status: { DRDY }
    [354064.268926] ata6: hard resetting link
    [354064.731093] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [354064.734739] ata6.00: configured for UDMA/133
    [354064.734759] sd 5:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [354064.734764] sd 5:0:0:0: [sdc] tag#24 Sense Key : Illegal Request [current]
    [354064.734767] sd 5:0:0:0: [sdc] tag#24 Add. Sense: Unaligned write command
    [354064.734771] sd 5:0:0:0: [sdc] tag#24 CDB: Write(16) 8a 00 00 00 00 01 06 c3 cb 68 00 00 07 00 00 00
    [354064.734774] print_req_error: I/O error, dev sdc, sector 4408462184
    [354064.734791] ata6: EH complete


    --
    Rich
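
    For reference, the scrub-and-recheck cycle Rich mentions would look roughly
    like this - a sketch, with 'tank' standing in for whatever the pool is
    actually called:

        zpool scrub tank        # walk the whole pool and repair what it can
        zpool status -v tank    # watch progress and check whether errors remain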

  • From William Kenworthy@21:1/5 to Dale on Tue Dec 28 10:50:01 2021
    A point to keep in mind - if you can feel the drive moving, it may be
    generating errors!  Depending on the drive, the errors may just be
    handled internally, and I can see that slowing things down, though it
    would probably be barely noticeable.  I have seen it myself, with random
    errors from a WD Green drive disappearing once it was properly
    immobilised.  When investigating I ran across articles discussing the
    problem, one of which fastened the drives to a granite slab for tests!
    Also see discussions on NAS setups and vibration affecting co-located drives.

    BillK

    ** Interesting read https://www.ept.ca/features/everything-need-know-hard-drive-vibration/


    On 27/12/21 22:15, Dale wrote:
    Wols Lists wrote:
    On 27/12/2021 13:40, Michael wrote:
    On Monday, 27 December 2021 11:32:39 GMT Wols Lists wrote:
    On 27/12/2021 11:07, Jacques Montier wrote:
    Well, i don't know if my partitions are aligned or mis-aligned... How
    could i get it ?
    fdisk would have spewed a bunch of warnings. So you're okay.

    I'm not sure of the details, but it's the classic "off by one" problem -
    if there's a mismatch between the kernel block size and the disk block
    size any writes required doing a read-update-write cycle which of course
    knackered performance. I had that hit a while back.

    But seeing as fdisk isn't moaning, that isn't the problem ...

    Cheers,
    Wol
    I also thought of misaligned boundaries when I first saw the error,
    but the
    mention of Seagate by the OP pointed me to another edge case which
    crept up
    with zstd compression on ZFS.  I'm mentioning it here in case it is
    relevant:

    https://livelace.ru/posts/2021/Jul/19/unaligned-write-command/

    that might be of interest to me ... I'm getting system lockups but
    it's not an SSD. I've got two IronWolves and a Barracuda.

    But I notice the OP has a Barra*C*uda. Note the different spelling.
    That's a shingled drive I believe, which shouldn't make a lot of
    difference in light usage, but you don't want to hammer it!

    Cheers,
    Wol


    I don't recall seeing this mentioned but this may be part of the issue
    unless I'm missing something that rules this out.  Could it be a drive
    is a SMR drive?  I recently made a new backup after wiping out the
    drive.  I know the backup drive is a SMR drive.  At first, it copied at
    a fairly normal speed but after a short time frame, it started slowing down.  At times, it would do only about 50 to 60MBs/sec.  It started out
    at well over 100MBs/sec which is fairly normal for this rig.  I would
    stop the copy process, let it catch up and restart just to give it some
    time to process.  I can't say it was any faster that way tho.

    The way I noticed my drive was SMR, I could feel the heads going back
    and forth by putting my hand on the enclosure.  It had a bumpy feel to
    it.  You can't really hear it tho.  If you can feel those little bumps
    even when the drive isn't mounted, I'd be thinking it is a SMR drive.
    There are also sites that you can look this sort of thing up on too.  If needed, I can go dig out some links.

    Just thought it worth a mention.

    Dale

    :-)  :-)


  • From Wols Lists@21:1/5 to William Kenworthy on Tue Dec 28 11:30:01 2021
    On 28/12/2021 09:30, William Kenworthy wrote:
    A point to keep in mind - if you can feel the drive moving it may be generating errors!  Depending on the drive, the errors may just be
    handled internally and I can see it slowing things down though probably
    would be barely noticeable.  I have seen it myself with random errors
    from a WD green drive disappearing when properly immobilised.  When investigating I ran across articles discussing the problem, one of which fastened the drives to a granite slab for tests!  Also see discussions
    on NAS seups and vibrations affecting co located drives.

    BillK

    ** Interesting read https://www.ept.ca/features/everything-need-know-hard-drive-vibration/

    Have you got the link to the follow-on article? I'd love to put them
    both on the linux raid wiki.

    Cheers,
    Wol

  • From Dale@21:1/5 to William Kenworthy on Tue Dec 28 13:40:03 2021
    William Kenworthy wrote:
    A point to keep in mind - if you can feel the drive moving it may be generating errors!  Depending on the drive, the errors may just be
    handled internally and I can see it slowing things down though
    probably would be barely noticeable.  I have seen it myself with
    random errors from a WD green drive disappearing when properly
    immobilised.  When investigating I ran across articles discussing the problem, one of which fastened the drives to a granite slab for
    tests!  Also see discussions on NAS seups and vibrations affecting co located drives.

    BillK

    ** Interesting read https://www.ept.ca/features/everything-need-know-hard-drive-vibration/


    This is just because it is an SMR drive.  It's done this ever since I
    bought the drive, and it has passed all tests.  There's a whole thread on
    this dating back several years.  I managed to buy an SMR drive before I
    even knew they existed.  Once it fills up that PMR cache section, it gets
    really slow.

    Dale

    :-)  :-) 

  • From Jacques Montier@21:1/5 to All on Tue Dec 28 14:10:02 2021
    On Tue, Dec 28, 2021 at 13:32, Dale <rdalek1967@gmail.com> wrote:

    William Kenworthy wrote:
    A point to keep in mind - if you can feel the drive moving it may be generating errors! Depending on the drive, the errors may just be
    handled internally and I can see it slowing things down though
    probably would be barely noticeable. I have seen it myself with
    random errors from a WD green drive disappearing when properly
    immobilised. When investigating I ran across articles discussing the problem, one of which fastened the drives to a granite slab for
    tests! Also see discussions on NAS seups and vibrations affecting co located drives.

    BillK

    ** Interesting read https://www.ept.ca/features/everything-need-know-hard-drive-vibration/


    This is just because it is a SMR drive. It's done this ever since I
    bought the drive and it has passed all tests. There's a whole thread on
    this dating back several years. I managed to buy a SMR drive before I
    even knew they existed. Once it fills up that PMR section, it gets
    really slow.

    Dale

    :-) :-)


    Hello all,

    Thanks a lot for all your responses!
    I think this issue is kernel-related.
    No problem with 5.10.76-gentoo-r1, but the issue appears
    with 5.15.11-gentoo.

    I read on the net that it is possible to deactivate the SATA NCQ
    (Native Command Queuing) protocol.
    So, in the GRUB config file, I added the line GRUB_CMDLINE_LINUX=libata.force=noncq.
    Now all the error messages are gone, and the boot time goes down to 24s with both kernel versions.
    BUT: do you think it could damage or slow down my SSDs and HDDs?

    Thanks again,

    Regards,

    --
    Jacques
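
    A side note on the kernel parameter used above - a sketch only; the
    libata.force syntax is documented in the kernel's admin guide, but the
    port number 6 is just an assumption taken from the ata6 lines in dmesg:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="libata.force=noncq"      # disable NCQ on every port
        # or, to limit it to the one misbehaving port:
        # GRUB_CMDLINE_LINUX="libata.force=6:noncq"
        # then regenerate the config:
        #   grub-mkconfig -o /boot/grub/grub.cfg

    Disabling NCQ doesn't put data at risk; the usual cost is reduced
    throughput under concurrent I/O on the spinning disks.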


  • From Jacques Montier@21:1/5 to All on Tue Dec 28 14:40:02 2021
    On Tue, Dec 28, 2021 at 14:03, Jacques Montier <jmontier@gmail.com> wrote:




    On Tue, Dec 28, 2021 at 13:32, Dale <rdalek1967@gmail.com> wrote:

    William Kenworthy wrote:
    A point to keep in mind - if you can feel the drive moving it may be
    generating errors! Depending on the drive, the errors may just be
    handled internally and I can see it slowing things down though
    probably would be barely noticeable. I have seen it myself with
    random errors from a WD green drive disappearing when properly
    immobilised. When investigating I ran across articles discussing the
    problem, one of which fastened the drives to a granite slab for
    tests! Also see discussions on NAS seups and vibrations affecting co
    located drives.

    BillK

    ** Interesting read
    https://www.ept.ca/features/everything-need-know-hard-drive-vibration/


    This is just because it is a SMR drive. It's done this ever since I
    bought the drive and it has passed all tests. There's a whole thread on
    this dating back several years. I managed to buy a SMR drive before I
    even knew they existed. Once it fills up that PMR section, it gets
    really slow.

    Dale

    :-) :-)


    Hello all,

    Thanks a lot for all your responses !
    I think this issue is kernel related.
    No problem with 5.10.76-gentoo-r1, but the issue appears
    with 5.15.11-gentoo.

    I read on the net that it could be possible to desactivate the sata
    protocol NCQ (Native Command Queuing)
    So, in the grub file, i added the
    ligne GRUB_CMDLINE_LINUX=libata.force=noncq
    Now, all the errors messages are gone and the booting time gets down to
    24s with the two kernel versions.
    BUT : do you think it could damage or slow down my SSD and HDD disks ?

    Thanks again,

    Regards,

    --
    Jacques



    Me again !

    Well, I cleaned my dusty mobo and unplugged and re-plugged the SATA cables. Now, with or without NCQ, the boot time is rather short (~28s).
    So it seems it was a connection problem.

    I still have some errors, such as:

    ............................................
    [   24.708377] ata6.00: exception Emask 0x10 SAct 0x50400001 SErr 0x4010000 action 0xe frozen
    [   24.708385] ata6.00: irq_stat 0x00400040, connection status changed
    [   24.708387] ata6: SError: { PHYRdyChg DevExch }
    [   24.708390] ata6.00: failed command: READ FPDMA QUEUED
    [   24.708391] ata6.00: cmd 60/08:00:78:08:c0/00:00:31:00:00/40 tag 0 ncq dma 4096 in
                            res 40/00:00:78:08:c0/00:00:31:00:00/40 Emask 0x10 (ATA bus error)
    [   24.708397] ata6.00: status: { DRDY }
    ..........................................

    To be sure, I'll buy some new SATA cables.

    Sorry for the noise, and thanks again for the help.

    Regards,

    --
    Jacques









  • From Rich Freeman@21:1/5 to jmontier@gmail.com on Tue Dec 28 16:40:01 2021
    On Tue, Dec 28, 2021 at 8:32 AM Jacques Montier <jmontier@gmail.com> wrote:

    Well, il cleaned my dusty mobo, unplugged and plugged again the sata cables. Now, with or without NCQ, boot time is rather short (~28s).
    So it seems it was a connection problem.

    Yeah, I've suspected my cables for some of these ATA errors. I do
    feel like SATA is probably a bit lacking in error management, but I
    haven't looked into the gory details.

    That said, without NCQ the drive should work completely normally but
    you may get degraded performance. The idea of NCQ is that the kernel
    can feed the drive multiple instructions at a time, and then the drive
    firmware can optimize order of execution to reduce seek time. The
    kernel doesn't know the physical layout of the disk and so if the
    commands are executed in strict order the drive may end up seeking in
    a non-optimal way, and of course with mechanical drives being what
    they are that is very costly. A bit of controller CPU spent solving
    the travelling salesman problem can save a LOT of time waiting for the
    disk and heads to move.
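
    A small sketch of how to check whether NCQ is actually in effect on a
    given drive - the sysfs attribute is standard, sda is only an example:

        cat /sys/block/sda/device/queue_depth
        # 1 means NCQ is off (e.g. after booting with libata.force=noncq);
        # ~31-32 is the usual depth when NCQ is enabled.

        # It can also be changed at runtime (as root), if the controller allows:
        echo 31 > /sys/block/sda/device/queue_depth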

    One of the bigger changes in NVMe is making the queue vastly larger so
    that more operations can be done in parallel. Obviously there are no
    seek times in flash, but I'm not familiar with how the actual
    addressing works so there may still be cases where the order of access
    matters, or where dispatching instructions in parallel keeps the
    pipeline more full.

    --
    Rich

  • From Frank Steinmetzger@21:1/5 to All on Sun Jan 2 20:30:01 2022
    On Mon, Dec 27, 2021 at 08:15:51AM -0600, Dale wrote:

    I don't recall seeing this mentioned but this may be part of the issue
    unless I'm missing something that rules this out.  Could it be a drive
    is a SMR drive?


    SMR may slow down drive response time and throughput, but it should never generate I/O errors in the syslog. If resetting or swapping the SATA cables does not help, then I’d suspect the drive going bad. A long selftest might
    be in order (smartctl -t long). smartctl -a shows how long this will take approximately (it’s rather accurate).

    For my PC’s rust drive (1 TB WD Blue) it says:
    Extended self-test routine
    recommended polling time: ( 113) minutes.


    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    “If it’s true that our species is alone in the universe, then I’d have to say
    the universe aimed rather low and settled for very little.” – George Carlin
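
    Spelled out, the commands Frank refers to would be something like this -
    the smartctl invocations are standard, the device name is only an example:

        smartctl -t long /dev/sda                              # start the extended self-test
        smartctl -a /dev/sda | grep -A1 'Extended self-test'   # estimated duration
        smartctl -l selftest /dev/sda                          # results once it finishes

    The test runs inside the drive's firmware, so the machine stays usable
    while it runs; it just takes longer if the drive is kept busy.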

  • From Dale@21:1/5 to Frank Steinmetzger on Sun Jan 2 20:40:01 2022
    Frank Steinmetzger wrote:
    Am Mon, Dec 27, 2021 at 08:15:51AM -0600 schrieb Dale:

    I don't recall seeing this mentioned but this may be part of the issue
    unless I'm missing something that rules this out.  Could it be a drive
    is a SMR drive?

    SMR may slow down drive response time and throughput, but it should never generate I/O errors in the syslog. If resetting or swapping the SATA cables does not help, then I’d suspect the drive going bad. A long selftest might be in order (smartctl -t long). smartctl -a shows how long this will take approximately (it’s rather accurate).

    For my PC’s rust drive (1 TB WD Blue) it says:
    Extended self-test routine
    recommended polling time: ( 113) minutes.




    That's true but weird things happen. 

    If it helps any, my 6 TB drive takes around 700 minutes.  My 8 TB drive
    takes around 1200 minutes.  Yea, over two days.  O_O

    Dale

    :-)  :-) 

  • From Frank Steinmetzger@21:1/5 to All on Sun Jan 2 21:40:01 2022
    On Sun, Jan 02, 2022 at 01:38:01PM -0600, Dale wrote:
    Frank Steinmetzger wrote:
    Am Mon, Dec 27, 2021 at 08:15:51AM -0600 schrieb Dale:

    I don't recall seeing this mentioned but this may be part of the issue
    unless I'm missing something that rules this out.  Could it be a drive
    is a SMR drive?

    SMR may slow down drive response time and throughput, but it should never generate I/O errors in the syslog. If resetting or swapping the SATA cables does not help, then I’d suspect the drive going bad.

    The original problem had already been solved (I was reading up on old mail
    from the Christmas week, go figure).

    A long selftest might be in order (smartctl -t long). smartctl -a shows
    how long this will take approximately (it’s rather accurate).

    For my PC’s rust drive (1 TB WD Blue) it says:
    Extended self-test routine
    recommended polling time: ( 113) minutes.


    If it helps any, my 6Tb drive takes around 700 minutes.  My 8Tb drive
    takes around 1200 minutes.

    Same for my 6 TB Reds in the NAS. But 1200 is a rather big increase. Did you ever try this? Almost double for only one third more capacity.

    I suspect that internally the drive can do the long selftest in parallel --
    all platters at the same time. But when going from CMR to SMR, platter count
    does not grow linearly with capacity. So the drive may have ⅓ more capacity,
    but the number of platters stays the same.

    Yea, over two days.  O_O

    Uhm, even without a calculator, I challenge that. 1 hour is 60 minutes, so
    10 hours = 600 minutes, making 1200 minutes a mere 20 hours. ;-)

    Dale

    :-)  :-) 


    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    Keyboard not connected, press F1 to continue.


  • From Dale@21:1/5 to Frank Steinmetzger on Sun Jan 2 21:50:01 2022
    Frank Steinmetzger wrote:
    Am Sun, Jan 02, 2022 at 01:38:01PM -0600 schrieb Dale:

    Same for my 6 TB Reds in the NAS. But 1200 is a rather big increase. Did you ever try this? Almost double for only one third more capacity.

    I suspect that internally the drive can do the long selftest in parallel -- all platters at the same time. But when going from CMR to SMR, platter count does not grow linearly with capacity. So the drive may have ⅓ more capacity,
    but number of platters stayed the same.

    This is what it reports for the 8 TB drive:


    root@fireball / # smartctl -a /dev/sdc | grep poll
    recommended polling time:        (   2) minutes.
    recommended polling time:        (1175) minutes.
    root@fireball / #


    root@fireball / # smartctl -i /dev/sdc
    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.14.15-gentoo] (local build)
    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family:     Western Digital Red
    Device Model:     WDC WD80EFZX-68UW8N0
    Serial Number:    XXXXXXXXXXXXX
    LU WWN Device Id: XXXXXXXXXXXXXX
    Firmware Version: 83.H0A83
    User Capacity:    8,001,563,222,016 bytes [8.00 TB]


    I recall running it when I first bought the drive, to be sure it was not reporting any problems after shipping etc, and that sounds about right. 
    I know it took a good long while.




    Yea, over two days.  O_O
    Uhm, even without a calculator, I challange that. 1 hour is 60 minutes, so
    10 hours = 600 minutes, making 1200 minutes a mere 20 hours. ;-)

    Dale

    :-)  :-) 



    Ooops.  I think I forgot to divide by 60 first.  Should be 1200/60/24.
    I think I skipped the 60 part.  Math isn't always my best thing.  lol

    Dale

    :-)  :-) 
