• Erasing a hard disk

    From Grimble@2:250/1 to All on Wed Jun 8 16:32:52 2022
    I have a Buffalo Linkstation with 2 x 2TB disks in a Raid 1
    configuration. I found out that one disk was defective, so I bought an
    identical one on ebay, secondhand because my model is no longer in
    production. I need to erase it (it was formatted as XFS) so that it can
    be used to recreate the raid array. I connected it via a USB adaptor and
    gave the command
    dd if=/dev/zero of=/dev/sdc bs=1M

    but got the message
    dd: error writing '/dev/sdc': No space left on device
    32117+0 records in
    32116+0 records out
    33676349440 bytes (34 GB, 31 GiB) copied, 9.70154 s, 3.5 GB/s

    which surprised me, because I thought that dd command was highly
    destructive. If that means it only wrote to about 15% of the disk, does
    that mean that 85% of the disk cannot be used?
    Can someone provide more insight please?
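    For what it's worth, dd's own byte count tells you how big the kernel
    thinks the target is. A quick sanity check with shell arithmetic only
    (/dev/sdc below is just the name from my session):

```shell
# dd stopped after reporting this many bytes written:
bytes=33676349440
# Convert to GiB; a 2 TB disk should come out well over 1800 GiB.
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints "31 GiB"
# Compare with what the kernel reports for the device itself:
# blockdev --getsize64 /dev/sdc
```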
    --
    Grimble
    Machine 'Haydn' running Plasma 5.20.4 on 5.15.43-desktop-1.mga8 kernel.
    Mageia release 8 (Official) for x86_64

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Wed Jun 8 18:17:56 2022
    Yes, dd is "highly destructive". My worry about writing zeros to the
    whole disk would be that it could also destroy the low-level
    formatting of the disk.

    On https://askubuntu.com/questions/17640/how-can-i-securely-erase-a-hard-drive it says
    "dd halts at the first bad block, and fails to clobber the rest (unless
    I painfully use skip=... to jump ahead each time it stops)."

    so it seems like that disk has bad blocks. Do you really want to be
    using a hard drive which has bad blocks? It sounds like it is on its
    last legs, which is probably why it was sold on ebay in the first place.

    Why not just buy two new hard disks to replace both of your disks? One
    of them has failed. What are the chances the other will fail in the
    next while?


    Also why wipe the disk? Why not just reformat it for use as a raid?




    On 2022-06-08, Grimble <grimble@nomail.afraid.org> wrote:
    I have a Buffalo Linkstation with 2 x 2TB disks in a Raid 1
    configuration. I found out that one disk was defective, so I bought an
    identical one on ebay, secondhand because my model is no longer in
    production. I need to erase it (it was formatted as XFS) so that it can
    be used to recreate the raid array. I connected it via a USB adaptor
    and gave the command
    dd if=/dev/zero of=/dev/sdc bs=1M

    but got the message
    dd: error writing '/dev/sdc': No space left on device
    32117+0 records in
    32116+0 records out
    33676349440 bytes (34 GB, 31 GiB) copied, 9.70154 s, 3.5 GB/s

    which surprised me, because I thought that dd command was highly
    destructive. If that means it only wrote to about 15% of the disk, does
    that mean that 85% of the disk cannot be used?
    Can someone provide more insight please?

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From David W. Hodgins@2:250/1 to All on Wed Jun 8 18:44:58 2022
    On Wed, 08 Jun 2022 11:32:52 -0400, Grimble <grimble@nomail.afraid.org> wrote:

    I have a Buffalo Linkstation with 2 x 2TB disks in a Raid 1
    configuration. I found out that one disk was defective, so I bought an
    identical one on ebay, secondhand because my model is no longer in
    production. I need to erase it (it was formatted as XFS) so that it can
    be used to recreate the raid array. I connected it via a USB adaptor and
    gave the command
    dd if=/dev/zero of=/dev/sdc bs=1M

    but got the message
    dd: error writing '/dev/sdc': No space left on device
    32117+0 records in
    32116+0 records out
    33676349440 bytes (34 GB, 31 GiB) copied, 9.70154 s, 3.5 GB/s

    which surprised me, because I thought that dd command was highly
    destructive. If that means it only wrote to about 15% of the disk, does
    that mean that 85% of the disk cannot be used?
    Can someone provide more insight please?

    Are you sure sdc is the hard disk and not a usb stick?
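    A quick way to check before writing anything (the device name and
    column choices below are examples; adjust to your setup):

```shell
# List block devices with size, model, and transport (usb vs sata):
lsblk -o NAME,SIZE,MODEL,TRAN
# Ask the kernel for the exact size of the suspect device, in bytes:
blockdev --getsize64 /dev/sdc
```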

    Regards, Dave Hodgins

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Grimble@2:250/1 to All on Thu Jun 9 12:40:25 2022
    On 08/06/2022 18:44, David W. Hodgins wrote:
    On Wed, 08 Jun 2022 11:32:52 -0400, Grimble <grimble@nomail.afraid.org> wrote:

    I have a Buffalo Linkstation with 2 x 2TB disks in a Raid 1
    configuration. I found out that one disk was defective, so I bought an
    identical one on ebay, secondhand because my model is no longer in
    production. I need to erase it (it was formatted as XFS) so that it can
    be used to recreate the raid array. I connected it via a USB adaptor and
    gave the command
    dd if=/dev/zero of=/dev/sdc bs=1M

    but got the message
    dd: error writing '/dev/sdc': No space left on device
    32117+0 records in
    32116+0 records out
    33676349440 bytes (34 GB, 31 GiB) copied, 9.70154 s, 3.5 GB/s

    which surprised me, because I thought that dd command was highly
    destructive. If that means it only wrote to about 15% of the disk, does
    that mean that 85% of the disk cannot be used?
    Can someone provide more insight please?

    Are you sure sdc is the hard disk and not a usb stick?

    Regards, Dave Hodgins
    Hello Dave. Yes, lsblk recognises /dev/sdc as a 2 TB/1.8 TiB SCSI disk.

    --
    Grimble
    Machine 'Haydn' running Plasma 5.20.4 on 5.15.43-desktop-1.mga8 kernel.
    Mageia release 8 (Official) for x86_64

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Grimble@2:250/1 to All on Thu Jun 9 13:50:33 2022
    On 08/06/2022 18:17, William Unruh wrote:
    Yes, dd is "highly destructive". My worry about writing zeros to the
    whole disk would be that it could also destroy the low-level
    formatting of the disk.

    On https://askubuntu.com/questions/17640/how-can-i-securely-erase-a-hard-drive
    it says
    "dd halts at the first bad block, and fails to clobber the rest (unless
    I painfully use skip=... to jump ahead each time it stops)."

    so it seems like that disk has bad blocks. Do you really want to be
    using a hard drive which has bad blocks? It sounds like it is on its
    last legs, which is probably why it was sold on ebay in the first place.

    Why not just buy two new hard disks to replace both of your disks? One
    of them has failed. What are the chances the other will fail in the
    next while?

    I wanted to protect my two years of backups by repairing the current
    degraded raid array. Your point about potential failure had not escaped
    me, but I thought I could get some safety first.

    Also why wipe the disk? Why not just reformat it for use as a raid?

    Buffalo technical help say the drive should not be pre-formatted
    before recovering the RAID array, which is why I am trying to erase
    the current XFS format.
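    If the goal is just to get rid of the XFS signature rather than zero
    the whole 2 TB, wipefs (part of util-linux) may be enough. A sketch,
    untested on my part, with /dev/sdc as the example name:

```shell
# Show the filesystem/raid signatures wipefs can detect on the disk:
wipefs /dev/sdc
# Erase all detected signatures (destructive; triple-check the name):
# wipefs -a /dev/sdc
```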



    On 2022-06-08, Grimble <grimble@nomail.afraid.org> wrote:
    I have a Buffalo Linkstation with 2 x 2TB disks in a Raid 1
    configuration. I found out that one disk was defective, so I bought an
    identical one on ebay, secondhand because my model is no longer in
    production. I need to erase it (it was formatted as XFS) so that it can
    be used to recreate the raid array. I connected it via a USB adaptor and
    gave the command
    dd if=/dev/zero of=/dev/sdc bs=1M

    but got the message
    dd: error writing '/dev/sdc': No space left on device
    32117+0 records in
    32116+0 records out
    33676349440 bytes (34 GB, 31 GiB) copied, 9.70154 s, 3.5 GB/s

    which surprised me, because I thought that dd command was highly
    destructive. If that means it only wrote to about 15% of the disk, does
    that mean that 85% of the disk cannot be used?
    Can someone provide more insight please?


    --
    Grimble
    Machine 'Haydn' running Plasma 5.20.4 on 5.15.43-desktop-1.mga8 kernel.
    Mageia release 8 (Official) for x86_64

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From David W. Hodgins@2:250/1 to All on Thu Jun 9 14:17:19 2022
    On Thu, 09 Jun 2022 07:40:25 -0400, Grimble <grimble@nomail.afraid.org> wrote:
    Hello Dave. Yes, lsblk recognises /dev/sdc as 2TB/1.8TiB SCSI disk.

    What I do with an old dead drive is drill a few holes through it.

    Regards, Dave Hodgins

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Thu Jun 9 16:45:18 2022
    On 2022-06-09, Grimble <grimble@nomail.afraid.org> wrote:
    On 08/06/2022 18:17, William Unruh wrote:
    Yes, dd is "highly destructive". My worry about writing zeros to the
    whole disk would be that it could also destroy the low-level
    formatting of the disk.

    On https://askubuntu.com/questions/17640/how-can-i-securely-erase-a-hard-drive
    it says
    "dd halts at the first bad block, and fails to clobber the rest (unless
    I painfully use skip=... to jump ahead each time it stops)."

    so it seems like that disk has bad blocks. Do you really want to be
    using a hard drive which has bad blocks? It sounds like it is on its
    last legs, which is probably why it was sold on ebay in the first place.

    Why not just buy two new hard disks to replace both of your disks? One
    of them has failed. What are the chances the other will fail in the
    next while?

    I wanted to protect my two years of backups by repairing the current
    degraded raid array. Your point about potential failure had not escaped
    me, but I thought I could get some safety first.

    Also why wipe the disk? Why not just reformat it for use as a raid?

    Buffalo technical help say the drive should not be pre-formatted
    before recovering the RAID array, which is why I am trying to erase
    the current XFS format.

    One suggestion I read somewhere is to do a destructive bad-blocks test
    on the drive. This writes to every location on the drive and reports
    all bad blocks found. If it is true that dd stops on bad blocks (and I
    have no reason to doubt it), then that would tell you exactly how bad
    your drive is, and erase all the data at the same time. All current
    informed opinion seems to be that data which has been overwritten
    cannot be recovered in any way whatsoever -- but then you are not
    interested in removing old data, just in removing the formatting on
    the disk so you can let the raid software format it. It might be
    sufficient simply to remove all formatting from the disk -- ie, just
    erase the formatting. You probably do not care what data is on the
    disk now, just that the raid software can use the disk, overwriting
    the current data.
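    Something along these lines, assuming the drive really is /dev/sdc
    (the -w test destroys everything on it):

```shell
# Destructive write test: writes patterns over the whole drive and
# lists every block that fails, wiping all data in the process.
#   -w  write-mode (destructive)   -s  show progress   -v  verbose
badblocks -wsv /dev/sdc
# Afterwards the drive's own error counters are worth a look
# (smartmontools; may need -d sat behind a USB adaptor):
smartctl -a /dev/sdc
```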

    But from the evidence I suspect you have a disk with bad blocks. A few
    might not be so bad, but I would suggest that the reason the disk was
    sold on ebay was that it is going -- ie it was developing lots of bad
    blocks. From what I know these things tend to avalanche. Eg a tiny
    piece of dirt gets in there (eg the surface of the drive flakes off a
    tiny piece); at that point you get an avalanche as it gets trapped
    between the head and the platter and scratches another part of the
    disk, releasing more particles, which then release more and more,
    until the inside of the disk is a rust storm.

    So, I would strongly urge you to do a badblocks test on the disk.
    Someone decided to sell it. Now, they could have just replaced a
    perfectly good full disk with a larger one, but more likely in my
    mind is that they started having some trouble with the disk and
    decided to replace it -- and then irresponsibly decided to sell the
    old one (buyer beware).

    I presume that you can still back up all of the data on your one good
    raid disk (depends on the type of raid you had on them). Just buy two
    new disks.

    --- MBSE BBS v1.0.8 (Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)