• [gentoo-user] Unable to expand ext4 partition

    From Julien Roy@21:1/5 to All on Sat Feb 5 18:50:01 2022
    Hello,

    I've been running an LVM RAID 5 on my home lab for a while, and recently it's been getting awfully close to 100% full, so I decided to buy a new drive to add to it. However, growing an LVM RAID is more complicated than I thought! I found very little
    documentation on how to do this, and settled on following some user's notes on the Arch Wiki [0]. I should've used mdadm!
    My RAID 5 consisted of 3x6TB drives, giving me 12TB of usable space. I am now trying to grow it to 18TB (4x6TB, minus one drive for parity).
    I seem to have done everything in order, since all 4 drives show up when I run the vgdisplay command, and lvdisplay tells me that there is 16.37TB of usable space in the logical volume.
    In fact, running fdisk -l on the LV confirms this as well:
    Disk /dev/vgraid/lvraid: 16.37 TiB

    However, the filesystem on it is still at 12TB (or a little bit less in HDD units) and I am unable to expand it.
    When I run the resize2fs command on the logical volume, I can see that it's doing something, and I can hear the disks making HDD noises, but after just a few minutes (perhaps seconds), the disks go quiet, and then a few more minutes later, resize2fs
    halts with the following error:
    doas resize2fs /dev/vgraid/lvraid
    resize2fs 1.46.4 (18-Aug-2021)
    Resizing the filesystem on /dev/vgraid/lvraid to 4395386880 (4k) blocks.
    resize2fs: Input/output error while trying to resize /dev/vgraid/lvraid
    Please run 'e2fsck -fy /dev/vgraid/lvraid' to fix the filesystem
    after the aborted resize operation.

    A few seconds after the resize2fs gives the "input/output" error, I can see the following lines appearing multiple times in dmesg:
    Feb  5 12:35:50 gentoo kernel: Buffer I/O error on dev dm-8, logical block 2930769920, lost async page write
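
    A few numbers for context: the failing block (2930769920) is just past the end of the old filesystem (2930257920 blocks, per the e2fsck output below), so it looks like the writes into the newly added space are what fail. In case it helps anyone reproduce the comparison, the size that the kernel, LVM and the ext4 superblock each report can be checked with something like this (dm-8 is the device name from the dmesg line above):
    cat /sys/block/dm-8/size                        # size in 512-byte sectors, as the kernel block layer sees it
    doas blockdev --getsize64 /dev/vgraid/lvraid    # size in bytes
    doas lvs --units b -o lv_name,lv_size vgraid    # size according to the LVM metadata
    doas dumpe2fs -h /dev/vgraid/lvraid | grep -i 'block count'    # current ext4 size (4k blocks)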

    At first I was worried about data corruption or a defective drive, but I ran a smartctl test on all 4 drives and they all came back healthy. Also, I am still able to mount the logical volume and access all the data without any issue.
    I then tried running the e2fsck command as instructed, which fixes some things [1], and then running the resize2fs command again, but it does the same thing every time.

    My Google skills don't seem to be good enough for this one, so I am hoping someone here has an idea of what is wrong...

    Thanks!
    Julien

    [0] https://wiki.archlinux.org/title/User:Ctag/Notes#Growing_LVM_Raid5
    [1] doas e2fsck -fy /dev/vgraid/lvraid
    e2fsck 1.46.4 (18-Aug-2021)
    Resize inode not valid.  Recreate? yes

    Pass 1: Checking inodes, blocks, and sizes
    Inode 238814586 extent tree (at level 1) could be narrower.  Optimize? yes

    Pass 1E: Optimizing extent trees
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Block bitmap differences:  -(2080--2096) +(2304--2305) +(2307--2321)
    Fix? yes

    Free blocks count wrong for group #0 (1863, counted=1864).
    Fix? yes


    /dev/vgraid/lvraid: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/vgraid/lvraid: 199180/366284800 files (0.8% non-contiguous), 2768068728/2930257920 blocks

  • From Wol@21:1/5 to Julien Roy on Sat Feb 5 20:30:05 2022
    On 05/02/2022 17:43, Julien Roy wrote:
    > Hello,
    >
    > I've been running an LVM RAID 5 on my home lab for a while, and recently
    > it's been getting awfully close to 100% full, so I decided to buy a new
    > drive to add to it. However, growing an LVM RAID is more complicated
    > than I thought! I found very little documentation on how to do this, and
    > settled on following some user's notes on the Arch Wiki [0]. I should've
    > used mdadm!
    > My RAID 5 consisted of 3x6TB drives, giving me 12TB of usable space. I am
    > now trying to grow it to 18TB (4x6TB, minus one drive for parity).
    > I seem to have done everything in order, since all 4 drives show up when
    > I run the vgdisplay command, and lvdisplay tells me that there is
    > 16.37TB of usable space in the logical volume.
    > In fact, running fdisk -l on the LV confirms this as well:
    > Disk /dev/vgraid/lvraid: 16.37 TiB

    If you'd been running mdadm I'd have been able to help ... my setup is
    ext4 over lvm over md-raid over dm-integrity over hardware...

    But you've made no mention of lvgrow or whatever it's called. Not using lv-raid myself, I don't know whether you put ext4 straight on top of the raid,
    or whether you need to grow the LV after you've grown the raid. I know
    I'd have to grow the volume.
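
    On plain LVM the step I have in mind would be something like the sketch below (lvextend is the command I was thinking of), but I haven't tried it with lv-raid, so treat it as a rough outline rather than a recipe:
    # grow the LV into whatever free extents the VG has, then grow the ext4 on it
    lvextend -l +100%FREE /dev/vgraid/lvraid
    resize2fs /dev/vgraid/lvraid
    # or let lvextend drive the filesystem resize itself:
    lvextend -r -l +100%FREE /dev/vgraid/lvraid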

    Cheers,
    Wol

  • From Julien Roy@21:1/5 to All on Sat Feb 5 20:40:01 2022
    I'm running ext4 over the logical volume over hardware

    The steps I used to grow the logical volume are as follows:
    1- I created a physical volume on the new disk using pvcreate /dev/sda (the new disk became sda and the existing ones shifted to sd[bcd])
    doas pvs -a
      PV             VG       Fmt  Attr PSize   PFree
      /dev/sda       vgraid   lvm2 a--   <5.46t    0
      /dev/sdb       vgraid   lvm2 a--   <5.46t    0
      /dev/sdc       vgraid   lvm2 a--   <5.46t    0
      /dev/sdd       vgraid   lvm2 a--   <5.46t    0
    2- I added the PV to the volume group using vgextend vgraid /dev/sda
    doas vgs -a
      VG       #PV #LV #SN Attr   VSize   VFree
      vgraid     4   1   0 wz--n-  21.83t    0

    3- I used the lvconvert command to add the PV to the LV: lvconvert --stripes 3 /dev/vgraid/lvraid
     doas lvs -a
      lvraid                       vgraid   rwi-aor---  16.37t      100.00         
      [lvraid_rimage_0]            vgraid   iwi-aor---  <5.46t
      [lvraid_rimage_1]            vgraid   iwi-aor---  <5.46t
      [lvraid_rimage_2]            vgraid   iwi-aor---  <5.46t
      [lvraid_rimage_3]            vgraid   Iwi-aor---  <5.46t
      [lvraid_rmeta_0]             vgraid   ewi-aor---   4.00m
      [lvraid_rmeta_1]             vgraid   ewi-aor---   4.00m
      [lvraid_rmeta_2]             vgraid   ewi-aor---   4.00m
      [lvraid_rmeta_3]             vgraid   ewi-aor---   4.00m       

    Now, if I remember this right, I ran the lvchange --syncaction check /dev/vgraid/lvraid
    command, waited almost a day for the sync to complete, and then ran the lvchange --rebuild /dev/sda /dev/vgraid/lvraid command.
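
    (For reference, the progress of the check/rebuild can be watched with something like the following; the field names come from lvs, so they may differ slightly between LVM versions:)
    # one way to watch the sync/rebuild progress and mismatch count
    doas lvs -a -o lv_name,sync_percent,raid_sync_action,raid_mismatch_count vgraid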

    One strange thing I noticed is that the `blkid` command doesn't show my LV anymore, and I cannot mount it from fstab using the UUID. I can mount it using the device name, however (mount /dev/vgraid/lvraid /mnt/raid), and that works.
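
    (If it helps with debugging, the filesystem UUID can still be read straight from the ext4 superblock, and udev can be asked to re-probe the device; something along these lines, with dm-8 being the block device from the lvdisplay output below:)
    doas dumpe2fs -h /dev/vgraid/lvraid | grep -i uuid
    doas blkid -p /dev/vgraid/lvraid     # -p probes the device directly instead of using the blkid cache
    doas udevadm trigger /dev/dm-8 && doas udevadm settle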

    At this point, I am considering transferring all my data to another volume, and re-creating the RAID using mdadm.

    Here's some more info on my VG and LV :
    doas vgdisplay /dev/vgraid
      --- Volume group ---
      VG Name               vgraid
      System ID            
      Format                lvm2
      Metadata Areas        4
      Metadata Sequence No  7
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               1
      Max PV                0
      Cur PV                4
      Act PV                4
      VG Size               21.83 TiB
      PE Size               4.00 MiB
      Total PE              5723164
      Alloc PE / Size       5723164 / 21.83 TiB
      Free  PE / Size       0 / 0  
      VG UUID               y8U06D-V0ZF-90MK-dhS6-szZf-7qzx-yErLF2

    doas lvdisplay /dev/vgraid/lvraid
      --- Logical volume ---
      LV Path                /dev/vgraid/lvraid
      LV Name                lvraid
      VG Name                vgraid
      LV UUID                73wJt0-E6Ni-rujY-9tRm-QsoF-8FPy-3c10Rg
      LV Write Access        read/write
      LV Creation host, time gentoo, 2021-12-02 10:12:48 -0500
      LV Status              available
      # open                 1
      LV Size                16.37 TiB
      Current LE             4292370
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     1024
      Block device           253:8

    Julien



    Feb 5, 2022, 14:09 by antlists@youngman.org.uk:

    On 05/02/2022 17:43, Julien Roy wrote:

    Hello,

    I've been running an LVM RAID 5 on my home lab for a while, and recently it's been getting awfully close to 100% full, so I decided to buy a new drive to add to it, however, growing an LVM RAID is more complicated than I thought! I found very few
    documentation on how to do this, and settled on following some user's notes on the Arch Wiki [0]. I should've used mdadm !...
    My RAID 5 consisted of 3x6TB drives giving me a total of 12TB of usable space. I am trying to grow it to 18TB now (4x6TB -1 for parity).
    I seem to have done everything in order since I can see all 4 drives are used when I run the vgdisplay command, and lvdisplay tells me that there is 16.37TB of usable space in the logical volume.
    In fact, running fdisk -l on the lv confirms this as well :
    Disk /dev/vgraid/lvraid: 16.37 TiB


    If you'd been running mdadm I'd have been able to help ... my setup is ext4 over lvm over md-raid over dm-integrity over hardware...

    But you've made no mention of lvgrow or whatever it's called. Not using lv-raid, I don't know whether you put ext straight on top of the raid, or do you need to grow the lv volume after you've grown the raid? I know I'd have to grow the volume.

    Cheers,
    Wol



  • From Wols Lists@21:1/5 to Julien Roy on Sat Feb 5 23:10:04 2022
    On 05/02/2022 19:37, Julien Roy wrote:
    > At this point, I am considering transferring all my data to another
    > volume, and re-creating the RAID using mdadm.

    You know about the raid wiki
    https://raid.wiki.kernel.org/index.php/Linux_Raid ?

    (Edited by yours truly ...)

    Cheers,
    Wol

  • From Julien Roy@21:1/5 to All on Sat Feb 5 23:20:01 2022
    I didn't - I typically use the Gentoo and Arch wiki when I need information, but will keep that in mind.
    I noticed, on that page, that there's a big bold warning about using post-2019 WD Red drives. Sadly, that's exactly what I am doing: my array is 4x WD60EFAX. I don't know whether that's the cause of the problem. It does say on the wiki that these drives
    can't be added to existing arrays, so it would make sense. Oh well, lesson learned.
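
    (For anyone wanting to double-check what their member drives actually are, the model strings are easy to pull out; exact output varies a little between smartmontools and util-linux versions:)
    lsblk -d -o NAME,MODEL,SIZE
    doas smartctl -i /dev/sda | grep -i -e 'device model' -e 'rotation rate'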

    Right now, I am trying to move my data to another volume I have. I don't have another 12TB volume, so instead I am trying to compress the data so it fits on my other volume. Not sure how well that'll work.
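
    (Something along these lines is what I mean by compressing it across; the paths, and zstd as the compressor, are just an example:)
    # stream the RAID contents into a single compressed archive on the other volume
    doas tar -C /mnt/raid -cf - . | zstd -T0 -10 > /path/to/other-volume/raid-backup.tar.zst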

    Julien



    Feb 5, 2022, 17:02 by antlists@youngman.org.uk:

    On 05/02/2022 19:37, Julien Roy wrote:

    At this point, I am considering transfering all my data to another volume, and re-creating the RAID using mdadm.


    You know about the raid wiki https://raid.wiki.kernel.org/index.php/Linux_Raid ?

    (Edited by yours truly ...)

    Cheers,
    Wol



  • From Wol@21:1/5 to Julien Roy on Sun Feb 6 00:10:02 2022
    On 05/02/2022 22:16, Julien Roy wrote:
    > I didn't - I typically use the Gentoo and Arch wiki when I need
    > information, but will keep that in mind.
    > I noticed, on that page, that there's a big bold warning about using
    > post-2019 WD Red drives. Sadly, that's exactly what I am doing: my array
    > is 4x WD60EFAX. I don't know whether that's the cause of the problem. It
    > does say on the wiki that these drives can't be added to existing
    > arrays, so it would make sense. Oh well, lesson learned.

    Ouch. EFAX drives are the new SMR version, it seems. You might have
    been lucky; it might have added okay.

    The problem with these drives, basically, is you cannot stream data to
    them. They'll accept so much, fill up their CMR buffers, and then stall
    while they do an internal re-organisation. And by the time they start responding again, the OS thinks the drive has failed ...

    I've just bought a Toshiba N300 8TB for £165 as my backup drive. As far
    as I know that's an okay drive for raid - I haven't heard any bad
    stories about SMR being sneaked in ... I've basically split it in two: 3TB
    as a spare partition for my raid, and 5TB as backup for my 6TB (3x3)
    raid array.

    Look at creating a raid-10 from your WDs, or, if you create a new raid-5
    array from scratch using --assume-clean and then format it, you're probably
    okay. Replacing SMRs with CMRs will probably work fine, so if one of your
    WDs fails, you should be okay replacing it, so long as it's not another
    SMR :-) If you do a scrub, expect loads of parity errors the first time,
    but you will probably get away with it if you're careful.
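
    Very roughly, and only once the data is safely elsewhere, something like this (a sketch, not a recipe - check the device names against your own system):
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]
    # or, to skip the long initial sync on the SMR drives and scrub afterwards instead:
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --assume-clean /dev/sd[abcd]
    mkfs.ext4 /dev/md0
    echo check > /sys/block/md0/md/sync_action     # that's the scrub mentioned above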

    Cheers,
    Wol

  • From Mark Knecht@21:1/5 to julien@jroy.ca on Sun Feb 6 01:50:02 2022
    If it's a WD Red Plus on the label then it's CMR and good. If it's WD
    Red without the "Plus" then it's SMR and WD has said don't use them
    for this purpose. It's not impossible to run the WD Red in a RAID, but
    they tend to fail when resilvering. If it resilvers correctly then it
    will probably be OK at least in the short term, but you should consider
    getting a couple of Red Plus and having them on hand in case a plain WD
    Red goes bad.

    HTH,
    Mark

    On Sat, Feb 5, 2022 at 5:38 PM Julien Roy <julien@jroy.ca> wrote:

    Thanks - the drives are new from this year, so I don't think they'll fail any time soon.
    Considering that the WD60EFAX is advertised as "RAID compatible", what's for sure is that my next drives won't be WD. CMR *or* SMR...

    Feb 5, 2022, 18:04 by antlists@youngman.org.uk:

    Ouch. EFAX drives are the new SMR version it seems. You might have been lucky, it might have added okay.

    The problem with these drives, basically, is you cannot stream data to them. They'll accept so much, fill up their CMR buffers, and then stall while they do an internal re-organisation. And by the time they start responding again, the OS thinks the
    drive has failed ...

    I've just bought a Toshiba N300 8TB for £165 as my backup drive. As far as I know that's an okay drive for raid - I haven't heard any bad stories about SMR being sneaked in ... I've basically split it in 2, 3TB as a spare partition for my raid, and
    5TB as backup for my 6TB (3x3) raid array.

    Look at creating a raid-10 from your WDs, or if you create a new raid-5 array from scratch using --assume-clean then format it, you're probably okay. Replacing SMRs with CMRs will probably work fine so if one of your WDs fail, you should be okay
    replacing it, so long as it's not another SMR :-) (If you do a scrub, expects loads of parity errors first time :-) but you will probably get away with it if you're careful.

    Cheers,
    Wol



  • From Julien Roy@21:1/5 to All on Sun Feb 6 01:40:01 2022
    Thanks - the drives are new from this year, so I don't think they'll fail any time soon.
    Considering that the WD60EFAX is advertised as "RAID compatible", one thing is for sure: my next drives won't be WD. CMR *or* SMR...

    Feb 5, 2022, 18:04 by antlists@youngman.org.uk:

    Ouch. EFAX drives are the new SMR version it seems. You might have been lucky, it might have added okay.

    The problem with these drives, basically, is you cannot stream data to them. They'll accept so much, fill up their CMR buffers, and then stall while they do an internal re-organisation. And by the time they start responding again, the OS thinks the
    drive has failed ...

    I've just bought a Toshiba N300 8TB for £165 as my backup drive. As far as I know that's an okay drive for raid - I haven't heard any bad stories about SMR being sneaked in ... I've basically split it in 2, 3TB as a spare partition for my raid, and
    5TB as backup for my 6TB (3x3) raid array.

    Look at creating a raid-10 from your WDs, or if you create a new raid-5 array from scratch using --assume-clean then format it, you're probably okay. Replacing SMRs with CMRs will probably work fine so if one of your WDs fail, you should be okay
    replacing it, so long as it's not another SMR :-) (If you do a scrub, expects loads of parity errors first time :-) but you will probably get away with it if you're careful.

    Cheers,
    Wol



  • From Wols Lists@21:1/5 to Mark Knecht on Sun Feb 6 09:20:01 2022
    On 06/02/2022 00:47, Mark Knecht wrote:
    > If it's a WD Red Plus on the label then it's CMR and good. If it's WD
    > Red without the "Plus" then it's SMR and WD has said don't use them
    > for this purpose. It's not impossible to run the WD Red in a RAID, but
    > they tend to fail when resilvering. If it resilvers correctly then it
    > will probably be OK at least in the short term, but you should consider
    > getting a couple of Red Plus and having them on hand in case a plain WD
    > Red goes bad.

    Avoid WD ...

    I've got two 4TB Seagate Ironwolves and a 8TB Toshiba N300.

    I've also got two 3TB Barracudas, but they're quite old and I didn't
    know they were a bad choice for raid. From what I can make out, Seagate
    has now split the Barracuda line in two, and you have the BarraCuda (all
    SMR) and FireCuda (all CMR) aimed at the desktop niche. So you might
    well be okay with a FireCuda but neither Seagate nor us raid guys would recommend it.

    Cheers,
    Wol
