• XFS problems

    From Doug Laidlaw@2:250/1 to All on Mon Jan 10 10:43:59 2022
    I have been saving my backups to an XFS partition on an external Seagate drive. I tried to do it with Ext4, but had too many failures.

    For the past month or so, we have been experiencing very severe storms,
    some with lightning. After the last one, my external disk became inaccessible. Looking around the Web, I find that this was a regular occurrence with CentOS 7, whose default filesystem was XFS. Usually,
    running xfs_repair is all that is needed, but this time, no primary or secondary superblocks can be found, and xfs_repair gives up. The
    computer does not now recognize the device.
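
    For anyone following along, the usual sequence is roughly the following,
    on the unmounted partition; /dev/sdb1 here is only an example device name:

    xfs_repair -n /dev/sdb1   # dry run: report damage without writing anything
    xfs_repair /dev/sdb1      # real repair; scans for secondary superblocks if the primary is bad
    xfs_repair -L /dev/sdb1   # last resort: zero a log that cannot be replayed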

    Are there any other steps I can take? btrfs is suggested as ideal for
    backups, but it is largely incompatible with any other filesystem, and
    it is said to be overkill for a single workstation. In any case, that
    is a question that arises only if I have to buy a new drive.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Gilberto F da Silva@2:250/1 to All on Mon Jan 10 13:17:08 2022
    Doug Laidlaw wrote:
    Are there any other steps I can take? btrfs is suggested as ideal for backups, but it is largely incompatible with any other filesystem, and
    it is said to be overkill for a single workstation.  In any case, that
    is a question that arises only if I have to buy a new drive.

    External discs are fragile devices. I have heard complaints about them
    from several people. One of my external discs stopped working suddenly.
    I was able to recover some data using
    https://www.cgsecurity.org/wiki/testdisk


    --

    Gilberto F da Silva

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Gilberto F da Silva@2:250/1 to All on Mon Jan 10 13:35:06 2022
    Doug Laidlaw wrote:
    I have been saving my backups to an XFS partition on an external Seagate drive.  I tried to do it with Ext4, but had too many failures.

    This does not make sense to me. Use a different filesystem tool to
    repair your XFS partition.


    --

    Gilberto F da Silva

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Mon Jan 10 14:01:12 2022
    On 11/1/22 00:35, Gilberto F da Silva wrote:
    Doug Laidlaw wrote:
    I have been saving my backups to an XFS partition on an external Seagate
    drive.  I tried to do it with Ext4, but had too many failures.

    This does not make sense to me. Use a different filesystem tool to repair your XFS partition.


    I need to rephrase that. My initial filesystem for this partition was
    Ext4. Then I switched to XFS. fsck cannot handle XFS, although it is suggested that running fsck on it on boot-up is useful. XFS has its own
    tool, xfs_repair, but as its very name suggests, it is a tool for
    repairs, not for ordinary checks on boot-up. I have used it in the
    past, but this time, the damage was too great.
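
    For what it is worth, the boot-time check for XFS is a stub: on most
    distributions fsck.xfs is a tiny script that simply exits 0 without
    checking anything, so a real check has to be run by hand with xfs_repair.
    One way to see the stub for yourself:

    file $(which fsck.xfs)     # typically a small shell script, not a real checker
    cat $(which fsck.xfs)      # on most systems it just exits successfully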

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Mon Jan 10 14:06:21 2022
    On 11/1/22 00:17, Gilberto F da Silva wrote:
    Doug Laidlaw wrote:
    Are there any other steps I can take? btrfs is suggested as ideal for
    backups, but it is largely incompatible with any other filesystem, and
    it is said to be overkill for a single workstation.  In any case, that
    is a question that arises only if I have to buy a new drive.

    External discs are fragile devices. I have heard complaints about them
    from several people. One of my external discs stopped working suddenly.
    I was able to recover some data using https://www.cgsecurity.org/wiki/testdisk


    Yes, I used testdisk years ago. Regarding an alternative, my computer
    has provision for installing 2.5 inch drives. These drives have been
    popular with the Raspberry Pi, because of their small size. I have no experience with them, but they seem to be solid-state and usable anywhere.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From TJ@2:250/1 to All on Mon Jan 10 14:58:10 2022
    On 1/10/22 05:43, Doug Laidlaw wrote:
    I have been saving my backups to an XFS partition on an external Seagate drive.  I tried to do it with Ext4, but had too many failures.

    For the past month or so, we have been experiencing very severe storms,
    some with lightning.  After the last one, my external disk became inaccessible.  Looking around the Web, I find that this was a regular occurrence with CentOS 7, whose default filesystem was XFS.  Usually, running xfs_repair is all that is needed, but this time, no primary or secondary superblocks can be found, and xfs_repair gives up.  The
    computer does not now recognize the device.

    Are there any other steps I can take? btrfs is suggested as ideal for backups, but it is largely incompatible with any other filesystem, and
    it is said to be overkill for a single workstation.  In any case, that
    is a question that arises only if I have to buy a new drive.

    A backup system that isn't reliable isn't much of a backup system.

    If I were in your shoes, even if I could find a way to "repair" the
    drive and get it working, I would never be able to trust it again. I'd
    buy a new drive. In fact, I'd probably buy two, so I had a backup for my backup.

    Regarding the file system, I use ext4 just about everywhere. Exceptions
    are on flash drives or memory cards that are used between Linux and
    other systems, where the job requires FAT32, NTFS or EXFAT. I've had a
    few failures over the years, but none where I would assign blame to the
    file system.

    A very few of my failures were hardware-related, heads crashing, bad
    cable, lightning, etc. But the vast majority of my failures were my own
    damn fault, caused by me doing something stupid when I knew better.

    TJ

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Gilberto F da Silva@2:250/1 to All on Mon Jan 10 15:05:15 2022
    Doug Laidlaw wrote:
    I have been saving my backups to an XFS partition on an external Seagate drive.  I tried to do it with Ext4, but had too many failures.

    For the past month or so, we have been experiencing very severe storms,
    some with lightning.  After the last one, my external disk became inaccessible.  Looking around the Web, I find that this was a regular occurrence with CentOS 7, whose default filesystem was XFS.  Usually, running xfs_repair is all that is needed, but this time, no primary or secondary superblocks can be found, and xfs_repair gives up.  The
    computer does not now recognize the device.

    Are there any other steps I can take? btrfs is suggested as ideal for backups, but it is largely incompatible with any other filesystem, and
    it is said to be overkill for a single workstation.  In any case, that
    is a question that arises only if I have to buy a new drive.

    Regardless of the size of a disk, we can always fill it. But the
    amount of original data is small. An account on a cloud drive such as
    Google Drive or MEGA can avoid the stress of losing data. On Mageia you
    can have a folder that synchronizes automatically with MEGA.
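
    One way to set that up from the command line is rclone, which has a MEGA
    backend; this is only a sketch, and the remote name "mega" is whatever
    you chose during configuration:

    rclone config                        # create a remote of type "mega" (interactive)
    rclone sync ~/backups mega:backups   # mirror the local backup folder to MEGA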


    --

    Gilberto F da Silva

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Bobbie Sellers@2:250/1 to All on Mon Jan 10 15:47:54 2022
    On 1/10/22 06:01, Doug Laidlaw wrote:
    On 11/1/22 00:35, Gilberto F da Silva wrote:
    Doug Laidlaw wrote:
    I have been saving my backups to an XFS partition on an external Seagate drive.  I tried to do it with Ext4, but had too many failures.

    This does not make sense in my head. Use a different file system tool to
    repair your XFS partition.


    I need to rephrase that.  My initial filesystem for this partition was Ext4.  Then I switched to XFS.  fsck cannot handle XFS, although it is suggested that running fsck on it on boot-up is useful.  XFS has its own tool, xfs_repair, but as its very name suggests, it is a tool for
    repairs, not for ordinary checks on boot-up.  I have used it in the
    past, but this time, the damage was too great.

    Regarding the reliability of external disks:
    Ever use GSmartControl?
    I use it to determine the age and reliability of internal and external
    disks. My external disks are much more reliable than the old disks
    (all now replaced but one) on my used machines. And I had no problems
    with my backups via Timeshift until after the middle of November 2021.

    I don't know if Mageia has this valuable tool or not.


    bliss - brought to you by the power and ease of PCLinuxOS
    and a minor case of hypergraphia

    --
    bliss dash SF 4 ever at dslextreme dot com

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: dis-organization (2:250/1@fidonet)
  • From Aragorn@2:250/1 to All on Mon Jan 10 17:17:00 2022
    On 10.01.2022 at 21:43, Doug Laidlaw scribbled:

    btrfs is suggested as ideal for backups, but it is largely
    incompatible with any other filesystem [...

    What exactly does that mean, "incompatible with any other filesystem"?

    btrfs IS a filesystem, and you create it just as you would create an
    ext4, XFS, JFS or reiserfs filesystem.

    ...] and it is said to be overkill for a single workstation.

    It has features that you probably wouldn't use, but that doesn't make
    it a bad choice. For the most part, it is a self-healing filesystem, it
    uses CRC32 checksums, and it is very fast.

    I am running btrfs as the filesystem for most of my partitions here,
    and have been for close to three years, without any problems. I don't
    use subvolumes (I still use traditional GPT partitions), but I use
    it on both the SSD with all of my system partitions and on the HDD that
    holds my Timeshift backups. (They are rsync'ed backups, by the way,
    not btrfs snapshots.)
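
    A minimal sketch of that, with the device name and mount point as
    placeholders only:

    mkfs.btrfs -L backups /dev/sdb1      # created just like any other filesystem
    mount /dev/sdb1 /mnt/backups
    btrfs scrub start /mnt/backups       # re-read everything and verify the checksums
                                         # (automatic repair needs a redundant copy, e.g. RAID1)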

    --
    With respect,
    = Aragorn =


    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Strider (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Mon Jan 10 17:21:14 2022
    On Mon, 10 Jan 2022 07:47:54 -0800, Bobbie Sellers wrote:

    Regarding the reliability of external disks.
    Ever use GSmartControl?
    I use it to determine the age and reliability of internal and external
    disks. My external disks are much more reliable than the old disks
    (all now replaced but one) on my used machines. And I had no problems
    with my backups via Timeshift until after the middle of November 2021.

    I don't know if Mageia has this valuable tool or not.

    Yep. I have it installed and get occasional messages from my smartd cron job.

    journalctl | grep smartd
    Jan 01 04:18:07 smartd[761]: Device: /dev/sdb, previous self-test completed without error
    Jan 01 10:18:07 smartd[761]: Device: /dev/sdb, SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 21 to 20
    Jan 03 05:18:07 smartd[761]: Device: /dev/sdb, SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 109 to 110
    Jan 03 05:18:07 smartd[761]: Device: /dev/sdb, SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 20 to 21
    Jan 03 09:48:07 smartd[761]: Device: /dev/sdb, SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 21 to 20
    Jan 08 03:48:07 smartd[761]: Device: /dev/sda, previous self-test completed without error
    Jan 08 03:48:07 smartd[761]: Device: /dev/sdb, self-test in progress, 10% remaining
    Jan 08 04:18:07 smartd[761]: Device: /dev/sdb, previous self-test completed without error
    Jan 08 13:48:07 smartd[761]: Device: /dev/sdb, SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 20 to 19
    Jan 10 01:18:07 smartd[761]: Device: /dev/sdb, SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 110 to 111
    Jan 10 01:18:07 smartd[761]: Device: /dev/sdb, SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 19 to 20
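
    The same information is available on demand with smartctl; the device
    name below is only an example:

    smartctl -H /dev/sdb        # overall health verdict
    smartctl -A /dev/sdb        # attribute table, including Hardware_ECC_Recovered
    smartctl -t short /dev/sdb  # start a self-test like the ones logged above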

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From faeychild@2:250/1 to All on Mon Jan 10 20:32:42 2022
    On 11/1/22 01:58, TJ wrote:

    A very few of my failures were hardware-related, heads crashing, bad
    cable, lightning, etc. But the vast majority of my failures were my own
    damn fault, caused by me doing something stupid when I knew better.


    Yes!! I believe that I also run with your pack :-)

    Regards


    --
    faeychild
    Running plasmashell 5.20.4 on 5.15.11-desktop-3.mga8 kernel.
    Mageia release 8 (Official) for x86_64 installed via Mageia-8-x86_64-DVD.iso


    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From David W. Hodgins@2:250/1 to All on Mon Jan 10 21:06:43 2022
    On Mon, 10 Jan 2022 05:43:59 -0500, Doug Laidlaw <laidlaws@hotkey.net.au> wrote:

    I have been saving my backups to an XFS partition on an external Seagate drive. I tried to do it with Ext4, but had too many failures.

    The biggest problem I've seen in the past with external drives is using an enclosure that isn't well ventilated, causing the electronics to overheat, which results in problems.

    Regards, Dave Hodgins

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From red floyd@2:250/1 to All on Tue Jan 11 00:34:37 2022
    On 1/10/2022 1:06 PM, David W. Hodgins wrote:
    On Mon, 10 Jan 2022 05:43:59 -0500, Doug Laidlaw
    <laidlaws@hotkey.net.au> wrote:

    I have been saving my backups to an XFS partition on an external Seagate
    drive.  I tried to do it with Ext4, but had too many failures.

    The biggest problem I've seen in the past with external drives is using an enclosure that isn't well ventilated, causing the electronics to overheat, which results in problems.

    Regards, Dave Hodgins

    I'm seeing that with external NVME enclosures. So I only plug those in
    when needed, otherwise they get too frickin' hot.


    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From David W. Hodgins@2:250/1 to All on Tue Jan 11 01:27:23 2022
    On Mon, 10 Jan 2022 19:34:37 -0500, red floyd <no.spam.here@its.invalid> wrote:
    I'm seeing that with external NVME enclosures. So I only plug those in
    when needed, otherwise they get too frickin' hot.

    It happens with spinning rust drives too, if they are left plugged into power for
    too long.

    Regards, Dave Hodgins

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Tue Jan 11 14:40:33 2022
    On 11/1/22 12:27, David W. Hodgins wrote:
    On Mon, 10 Jan 2022 19:34:37 -0500, red floyd <no.spam.here@its.invalid> wrote:
    I'm seeing that with external NVME enclosures.  So I only plug those in
    when needed, otherwise they get too frickin' hot.

    It happens with spinning rust drives too, if they are left plugged into power for
    too long.

    Regards, Dave Hodgins

    Good to see you are still around, Bits. I opted for a "WD Blue"
    solid-state drive. Now I am in the usual vicious circle: the device
    needs to be initialized to become visible, and it is not visible, so initialization is impossible. There is a tutorial on YouTube for this
    very device, but it assumes that the drive can be found.

    I hadn't heard of GSmartControl, but my mobo has SMART available.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Tue Jan 11 14:54:30 2022
    On 12/1/22 01:40, Doug Laidlaw wrote:
    On 11/1/22 12:27, David W. Hodgins wrote:
    On Mon, 10 Jan 2022 19:34:37 -0500, red floyd
    <no.spam.here@its.invalid> wrote:
    I'm seeing that with external NVME enclosures.  So I only plug those in when needed, otherwise they get too frickin' hot.

    It happens with spinning rust drives too, if they are left plugged
    into power for
    too long.

    Regards, Dave Hodgins

    Good to see you are still around, Bits. I opted for a "WD Blue"
    solid-state drive.  Now I am in the usual vicious circle: the device
    needs to be initialized to become visible, and it is not visible, so initialization is impossible.  There is a tutorial on YouTube for this
    very device, but it assumes that the drive can be found.

    I hadn't heard of GsmartControl, but my mobo has SMART available.

    GSmartControl is in the repos, but it didn't seem to help.


    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Tue Jan 11 15:33:25 2022
    On Wed, 12 Jan 2022 01:40:33 +1100, Doug Laidlaw wrote:

    Good to see you are still around, Bits. I opted for a "WD Blue"
    solid-state drive. Now I am in the usual vicious circle: the device
    needs to be initialized to become visible, and it is not visible, so initialization is impossible.

    If it were me, I would get/burn/load the latest systemrescue-8.07-amd64.iso, click the gparted icon in the bottom tray, and see if gparted can create a disk partition table and a partition, and format it.

    https://www.system-rescue.org/
    http://www.sysresccd.org/Download has a link to how to burn the
    iso contents to usb if needed. Do not use any other method
    to do so.

    If you want, you can install gparted from Mageia and run it as root.
    Just make sure to pick the correct device.
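
    If gparted can see the disk, the equivalent from a root shell is roughly
    the following; the device name is a placeholder, so double-check it with
    lsblk first, because mklabel wipes the disk:

    lsblk -o NAME,SIZE,MODEL
    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart backups xfs 1MiB 100%
    mkfs.xfs /dev/sdX1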



    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Bobbie Sellers@2:250/1 to All on Tue Jan 11 16:53:59 2022
    On 1/11/22 06:54, Doug Laidlaw wrote:
    On 12/1/22 01:40, Doug Laidlaw wrote:
    On 11/1/22 12:27, David W. Hodgins wrote:
    On Mon, 10 Jan 2022 19:34:37 -0500, red floyd
    <no.spam.here@its.invalid> wrote:
    I'm seeing that with external NVME enclosures.  So I only plug those in when needed, otherwise they get too frickin' hot.

    It happens with spinning rust drives too, if they are left plugged
    into power for
    too long.

    Regards, Dave Hodgins

    Good to see you are still around, Bits. I opted for a "WD Blue"
    solid-state drive.  Now I am in the usual vicious circle: the device
    needs to be initialized to become visible, and it is not visible, so
    initialization is impossible.  There is a tutorial on YouTube for this
    very device, but it assumes that the drive can be found.

    I hadn't heard of GsmartControl, but my mobo has SMART available.

    GsmartControl is in the repos, but it didn't seem to help.


    GSmartControl is for determining the condition of your drives.

    Check your cabling and positioning of the drive.

    Have you tried to find it with GParted?

    Otherwise you may have bought or drawn a dud drive.

    bliss - too tired to even say more.

    --
    bliss dash SF 4 ever at dslextreme dot com

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: dis-organization (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Wed Jan 12 03:15:37 2022
    On 12/1/22 02:33, Bit Twister wrote:
    If it were me, I would get/burn/load the latest systemrescue-8.07-amd64.iso, click the gparted icon in the bottom tray, and see if gparted can create a disk partition table and a partition, and format it.

    Did that. I have a recent copy of system rescue. Also tried the one for Mageia. It can't find a drive either. I was thinking that once I have
    a partition table, I would be set, but I can't even get that far.

    I still have Rescatux, but it seems to be pretty useless. Years ago, I
    had a rescue disk with all the manufacturer's setup tools, but no
    longer. Checking the cables next. A 2.5 inch drive is the laptop size, but solid state makes no noise and has no indicator light. A different SATA data cable, one that used to run a now-failed CD drive, made no difference.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Wed Jan 12 04:37:57 2022
    On 2022-01-12, Doug Laidlaw <laidlaws@hotkey.net.au> wrote:
    On 12/1/22 02:33, Bit Twister wrote:
    If it were me, I would get/burn/load the latest systemrescue-8.07-amd64.iso, click the gparted icon in the bottom tray, and see if gparted can create a disk partition table and a partition, and format it.

    Did that. I have a recent copy of system rescue. Also tried the one for Mageia. It can't find a drive either. I was thinking that once I have
    a partition table, I would be set, but I can't get that far, even.

    I still have Rescatux, but it seems to be pretty useless. Years ago, I
    had a rescue disk with all the manufacturer's setup tools, but no
    longer. Checking the cables next. A 2.5 inch CD is available for
    laptops, but solid state makes no noise, and has no indicator light. A different SATA data cable that used to run a failed CD, made no difference.

    Not sure what the SATA cable is for with a solid-state disk; they usually
    mount on the motherboard. Is this an external drive coming in through a
    USB port?

    The other thing is, as suggested, you might just have a dud drive.
    If the system does not recognize the disk, not even any entry under
    /dev/sd... or /dev/nvme.... then you have either installed it
    incorrectly or you have a dead drive.
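
    A quick way to check whether the kernel sees the drive at all (nothing
    here is specific to any distribution):

    lsblk -o NAME,SIZE,MODEL          # every block device the kernel knows about
    ls /dev/sd? /dev/nvme?n?          # raw device nodes, if any were created
    dmesg | grep -iE 'ata[0-9]|nvme'  # what the kernel said when probing the ports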



    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From red floyd@2:250/1 to All on Wed Jan 12 04:47:08 2022
    On 1/11/2022 8:37 PM, William Unruh wrote:

    Not sure what the sata cable is for for a solid state disk. They usually mount on the motherboard. Is this an external drive coming in through a
    usb port?

    There are SSDs in a 2.5 or 3.5 inch form factor with a SATA connector, as
    well as NVME drives, which connect directly to the motherboard.



    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Wed Jan 12 09:01:57 2022
    On Wed, 12 Jan 2022 14:15:37 +1100, Doug Laidlaw wrote:
    On 12/1/22 02:33, Bit Twister wrote:
    If it were me, I would get/burn/load the latest systemrescue-8.07-amd64.iso, click the gparted icon in the bottom tray, and see if gparted can create a disk partition table and a partition, and format it.

    Did that. I have a recent copy of system rescue. Also tried the one for Mageia. It can't find a drive either. I was thinking that once I have
    a partition table, I would be set, but I can't get that far, even.

    I still have Rescatux, but it seems to be pretty useless. Years ago, I
    had a rescue disk with all the manufacturer's setup tools, but no
    longer. Checking the cables next. A 2.5 inch CD is available for
    laptops, but solid state makes no noise, and has no indicator light. A different SATA data cable that used to run a failed CD, made no difference.


    I probably missed it but what is the model and vendor name of the device?
    I want to look at the user manual.

    As root, check for it with
    hwinfo --short > hwinfo.short
    and hwinfo > hwinfo.long

    and look through the hwinfo.short and hwinfo.long files.





    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From TJ@2:250/1 to All on Wed Jan 12 16:09:34 2022
    On 1/11/22 23:37, William Unruh wrote:
    On 2022-01-12, Doug Laidlaw <laidlaws@hotkey.net.au> wrote:
    I still have Rescatux, but it seems to be pretty useless. Years ago, I
    had a rescue disk with all the manufacturer's setup tools, but no
    longer. Checking the cables next. A 2.5 inch CD is available for
    laptops, but solid state makes no noise, and has no indicator light. A
    different SATA data cable that used to run a failed CD, made no difference.

    Not sure what the sata cable is for for a solid state disk. They usually mount on the motherboard. Is this an external drive coming in through a
    usb port?

    There are external drives that connect via an eSATA port that is present
    on some systems. I have one on the back of one of my desktops. It
    connects to the motherboard SATA system for data, but needs external
    power for the device. The connection transfers data at SATA speeds,
    faster than usb 2.0, more comparable to usb 3.

    The other thing is, as suggested, you might just have a dud drive.
    If the system does not recognize the disk, not even any entry under /dev/sd... or /dev/nvme.... then you have either installed it
    incorrectly or you have a dead drive.


    On my system, any device connected to the eSATA port is mounted as a "removable" device, just the same as it would be if connected via usb.

    TJ

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Thu Jan 13 01:38:37 2022
    On 12/1/22 20:01, Bit Twister wrote:
    I probably missed it but what is the model and vendor name of the device?
    I want to look at the user manual.

    as root check for it with
    hwinfo --short > hwinfo.short
    and hwinfo > hwinfo.long

    and look through the hwinfo,short and hwinfo,long files.




    It is a brand new Western Digital "WD Blue" series, using SATA
    connections. The current series seems to be "WD Purple." While it was connected, the journal showed SATA errors, probably because it could not
    be recognized. These stopped as soon as I disconnected the drive.

    "Jan 13 11:39:45 dougshost.douglaidlaw.net kernel: ata4: SATA link down (SStatus 1 SControl 300)"

    (A guy on a forum with the same issue was told to swap his cables around
    -- on a laptop! I had already tried that.)

    As I mentioned, a YouTube tutorial uses the same drive; only the
    capacity is different. The setup screen there is taken straight from
    the Western Digital Web site. The drive should be listed in Windows as
    an "uninitialised device," not just plain "not there." I haven't tried updating my BIOS; the last time I tried, mine was treated as obsolete.
    gparted can't see it, nor can two partition managers for Windows. I am thinking that I would be better off replacing the outboard drive, or it
    may not need replacing: it can be read, but the data is corrupted; no
    inode numbers can be found to boot from.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Thu Jan 13 02:30:23 2022
    On Thu, 13 Jan 2022 12:38:37 +1100, Doug Laidlaw wrote:
    On 12/1/22 20:01, Bit Twister wrote:
    I probably missed it but what is the model and vendor name of the device?
    I want to look at the user manual.

    as root check for it with
    hwinfo --short > hwinfo.short
    and hwinfo > hwinfo.long

    and look through the hwinfo.short and hwinfo.long files.




    It is a brand new Western Digital "WD Blue" series, using SATA
    connections. The current series seems to be "WD Purple." While it was connected, the Journal showed SATA errors, probably because it could not
    be recognized. These stopped as soon as I unlinked the drive.

    "Jan 13 11:39:45 dougshost.douglaidlaw.net kernel: ata4: SATA link down (SStatus 1 SControl 300)"
    (A guy on a forum with the same issue was told to swap his cables around
    -- on a laptop! I had already tried that.)


    A quick Google search on "SStatus 1 SControl 300"
    says to swap cables with a working drive to prove it is not a cable/wiring problem;
    then there is this link.

    https://forum.armbian.com/topic/16030-ata1-sata-link-down-sstatus-0-scontrol-300/
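
    One more low-effort check before blaming the drive, assuming ata4 really
    is the failing port as in the journal line quoted above:

    journalctl -k -f          # follow kernel messages live while reseating the cable
    dmesg | grep -i ata4      # afterwards, see whether the link ever came up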


    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Thu Jan 13 10:21:27 2022
    On 13/1/22 13:30, Bit Twister wrote:
    Quick google search on SStatus 1 SControl 300
    says to swap cables to working drive to prove not a cable/wiring problem
    then there is this link.

    https://forum.armbian.com/topic/16030-ata1-sata-link-down-sstatus-0-scontrol-300/

    That was one of the first things I tried.

    I can now see the Seagate drive in Windows. To keep Win 10 working, I
    have installed a maintenance tool from IObit called "Advanced
    SystemCare." It has a full quiver of apps. It includes a program called Disk Doctor, which blanked the disk and handed back an empty drive. Linux
    still can't see it, but I can now get it working again. The backup data
    will change with time, and I can download the contents of the other two partitions again. But I am not counting my chickens just yet.

    Thanks everybody for your help.



    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)
  • From TJ@2:250/1 to All on Thu Jan 13 17:00:11 2022
    On 1/13/22 05:21, Doug Laidlaw wrote:
    On 13/1/22 13:30, Bit Twister wrote:
    Quick google search on SStatus 1 SControl 300
    says to swap cables to working drive to prove not a cable/wiring problem
    then there is this link.

    https://forum.armbian.com/topic/16030-ata1-sata-link-down-sstatus-0-scontrol-300/


    That was one of the first things I tried.

    I can now see the Seagate drive in Windows.  To keep Win 10 working, I
    have installed a maintenance tool from IObit called "Advanced SystemCare."  It has a full quiver of apps.  It includes a program called Disk Doctor, which blanked the disk and handed back an empty drive. Linux
    still can't see it, but I can now get it working again.  The backup data will change with time, and I can download the contents of the other two partitions again. But I am not counting my chickens just yet.

    Thanks everybody for your help.


    I've seen the time, just recently in fact, where a drive wasn't
    recognized by Mageia if hot-plugged while the system was running, but
    was if the system was booted with the drive connected.

    This particular drive was an ssd that had been the main drive in a
    laptop, but had been replaced with a bigger drive. If I hot-plugged it
    into my SATA drive bay Mageia couldn't see it, and neither could gparted
    if run from within Mageia. But, if it was in place during the boot,
    either Mageia or free-standing gparted could see it.

    You might want to try that...
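
    If a reboot is inconvenient, one alternative (assuming the controller
    supports hot-plug at all) is to force a rescan of the SATA/SCSI hosts as
    root and then look for the disk again; host numbers vary per machine:

    for h in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$h"; done
    lsblk                     # see whether the drive has appeared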

    TJ

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Doug Laidlaw@2:250/1 to All on Fri Jan 14 20:32:33 2022
    On 14/1/22 04:00, TJ wrote:
    I've seen the time, just recently in fact, where a drive wasn't
    recognized by Mageia if hot-plugged while the system was running, but
    was if the system was booted with the drive connected.

    I have seen the opposite. Xfce has a setting, "mount a drive which is hotplugged." It seems to do exactly that: USB sticks which are present
    on bootup are not detected; to have them detected, unplug them and then
    put them back. Trying a different USB port is recommended, but usually,
    it is unnecessary.

    --- MBSE BBS v1.0.7.24 (GNU/Linux-x86_64)
    * Origin: Aioe.org NNTP Server (2:250/1@fidonet)