• Re: Do you use badblocks ?

    From Andy Burns@21:1/5 to Spiros Bousbouras on Sat Dec 2 13:06:03 2023
    Spiros Bousbouras wrote:

    I mean use badblocks either directly or through the -c option of mkfs and similar commands.

    My understanding is that , if a sector of a drive seems to go bad , the drive software automatically "remaps" it (or whatever the appropriate verb is) to
    a different sector so does running badblocks actually buy you anything ?


    I presume your disc isn't so old that it comes with a separate print-out
    of the bad sector numbers?

    Is badblocks only intended for hard drives or is it also potentially useful for solid state drives ?

    I would just leave the spare block management to the controller onboard
    the SSD, it'll understand at a lower level what is going on

    If you use badblocks , do you do a read-write test or just read ? Some months
    ago I did with a new 128 gigabytes SSD

    mkfs.ext3 -c -c -j /dev/sdc1

    and it was taking forever. After several days during which progress seemed to be slowing down (based on the progress percentages printed by the programme) I stopped it and did a read only test which worked fine.

    I should think that all that any write tests would achieve is using up
    the write-cycles of the SSD :-(

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Andy Burns on Sat Dec 2 13:28:39 2023
    On 02/12/2023 13:06, Andy Burns wrote:
    Some months
    ago I did with a new 128 gigabytes SSD

         mkfs.ext3 -c -c -j /dev/sdc1

    and it was taking forever. After several days during which progress
    seemed to
    be slowing down (based on the progress percentages printed by the
    programme)
    I stopped it and did a read only test which worked fine.

    I should think that all that any write tests would achieve is using up
    the write-cycles of the SSD 🙁

    Indeed.

    --
    Renewable energy: Expensive solutions that don't work to a problem that
    doesn't exist instituted by self legalising protection rackets that
    don't protect, masquerading as public servants who don't serve the public.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Marco Moock@21:1/5 to All on Sat Dec 2 14:51:37 2023
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    My understanding is that , if a sector of a drive seems to go bad ,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector so does running badblocks
    actually buy you anything ?

That applies to HDDs, but mostly not to floppies, USB thumb drives, or
tapes.

    The amount of reserve sectors is also limited, so if too many are bad,
    they can't be remapped and badblocks will find them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David W. Hodgins@21:1/5 to Spiros Bousbouras on Sat Dec 2 12:24:22 2023
    On Sat, 02 Dec 2023 07:44:46 -0500, Spiros Bousbouras <spibou@gmail.com> wrote:
    If you use badblocks , do you do a read-write test or just read ? Some months
    ago I did with a new 128 gigabytes SSD

    mkfs.ext3 -c -c -j /dev/sdc1

    and it was taking forever. After several days during which progress seemed to be slowing down (based on the progress percentages printed by the programme) I stopped it and did a read only test which worked fine.

    Don't use badblocks on ssd drives.

    On my oldest ssd drive ...
    # smartctl -a /dev/sdb|grep -e Reallocate -e Lifetime
    5 Reallocated_Sector_Ct 0x0000 100 100 000 Old_age Offline - 1
    232 Lifetime_Writes 0x0000 100 100 000 Old_age Offline - 95314068939

An ssd drive uses pages, each some multiple of sectors in size. When the os issues a write, the ssd controller in the drive must find an available page, either from a table of known unused pages, or by searching for one.

When badblocks runs during file system creation, the ssd controller cannot yet know which blocks are not actually used within the file system, so once it runs out of reserve pages, it has to search for unused pages.

    Keeping track of what pages are used works at the file system level, after the partition has been formatted.

With ssd drives, either the file system has to be mounted using the discard option,
or the fstrim command must be run periodically, to let the ssd controller know which blocks are not in use, so that it can maintain its table of available pages.
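For illustration, the two approaches might look like this (the device name and mount point are hypothetical; the fstab entry enables continuous discard, the commented command does periodic trimming instead):

```
# /etc/fstab entry with continuous TRIM via the discard mount option:
/dev/sdX1   /data   ext4   defaults,discard   0 2

# Alternatively, periodic TRIM (typically from a cron job or systemd timer):
#   fstrim -v /data
```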

    When the ssd drive controller has to search for unused pages, write performance drops to a speed that makes a floppy drive look fast.

    Also keep in mind that partitions must be aligned on the page erase size used by the ssd drive. Most partitioning software will do that automatically now, but older software may not.

    Regards, Dave Hodgins

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Spiros Bousbouras on Sat Dec 2 18:20:11 2023
    Spiros Bousbouras wrote:

    What is the difference with hard disks ? Is it for example that a new SSD is guaranteed to initially write everything correctly so running badblocks is a waste whereas even a new hard disk may have blocks where writing won't
    work ?

    I don't suppose any storage device guarantees all sectors are writeable
    out of the box, they just hope to have enough spares to cope, but who
    knows what damage it might take in transit if you're unlucky?

    Probably hard discs and SSDs tell so many lies to the operating system,
    that trying to outsmart the manufacturer isn't a game worth playing?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to Spiros Bousbouras on Sat Dec 2 20:32:05 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    My understanding is that , if a sector of a drive seems to go bad ,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector so does running badblocks
    actually buy you anything ?

    That applies to HDDs, but mostly not to floppies, USB thumb drives nor
    tapes.

    The amount of reserve sectors is also limited, so if too many are bad,
    they can't be remapped and badblocks will find them.

And how is that useful ? badblocks itself simply prints the list of bad blocks to stdout.

    Yes. If you are running it from one of the fsck.ext? programs, then
    that "print list to stdout" goes to fsck.ext? and is used to update the filesystem badblock list. Which then means the filesystem will skip
    using those blocks, because they are marked bad.

    If you are running it yourself, then this gives you a count of the
    number of bad blocks, which is information you might find useful in
    evaluating how much life the disk may have left.
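As a sketch of that direct use (the device name is hypothetical and the scan itself is shown commented out; the output file is simulated so the counting step is visible):

```shell
# A direct read-only scan writes one bad-block number per line:
#   badblocks -sv -o badlist.txt /dev/sdX1
# Simulated result file, standing in for a real scan:
printf '8192\n8193\n40960\n' > badlist.txt
# The bad-block count is then just the line count:
wc -l < badlist.txt   # prints the number of bad blocks found (3 here)
```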

    If you invoke badblocks indirectly by using one of the mkfs*
    utilities then I assume what happens is that the list of bad blocks
    will be written at an appropriate location on the filesystem so that
    the operating system can avoid writing on those blocks. Do I have
    this right ?

    Almost. So the *filesystem* can avoid writing to those blocks.

    I was also wondering , would it be possible for the operating system
    to write some data and then immediately read it back to make sure
    that it was written correctly?

    Possible in general: yes. In fact, the ancient Atari DOS for the very
    old Atari 810 5.25 floppy disk, if set to do so, would do exactly this.
    Write a sector, then immediately read it back and compare. Of course
    doing so slowed down disk writes by a considerable amount (and they
    were not anywhere near fast by today's standards to begin with).

Can Linux be configured to do so? Not unless one of the more esoteric filesystems has a "read after write" configuration option.

    The advantage relative to using badblocks is that with badblocks some
    blocks may become bad after you have run badblocks whereas with the
    method I'm proposing the checks would be ongoing.

    And your write speed would slow down to one disk block per disk platter rotation (or set of blocks, depending upon how it was implemented).
For most users, the overall reliability is already sufficiently high
that slowing down writes by 50x or 100x is not worth the
fractional percentage gain in overall reliability.

    Obviously what I'm suggesting would negatively affect performance but
    it could be tunable ; like you could have an option for mounting a
filesystem with such a functionality enabled or not. Does such functionality exist at all on Linux (with some types of filesystems) ?

    It might. I do not know of any that have it, but I also do not know
    the details of all the possible filesystems that are bundled with (or
    can be added to) a Linux kernel.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John-Paul Stewart@21:1/5 to Spiros Bousbouras on Sat Dec 2 15:40:12 2023
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs and similar commands.

    My understanding is that , if a sector of a drive seems to go bad , the drive software automatically "remaps" it (or whatever the appropriate verb is) to
    a different sector so does running badblocks actually buy you anything ?

    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user. Just
    like you said.

    Before that time (1980s and earlier, really) that wasn't the case. As
    another poster alluded to, hard drives of that era would come with a
    piece of paper listing the known bad blocks from manufacturing defects,
    with space for you to write in additional entries as you found them.
    That list could then be used when creating a filesystem (e.g., see the
    -l option to mke2fs) so that the filesystem would not use those known
    bad blocks.
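That historical workflow can be sketched against a scratch image file instead of a real disk (the block numbers here are made up, standing in for the numbers off the printed list):

```shell
# Create a small scratch "disk" image (4096 blocks of 1 KiB).
dd if=/dev/zero of=disk.img bs=1024 count=4096 2>/dev/null
# Hand-written bad block list, one block number per line.
printf '3000\n3001\n3500\n' > badlist.txt
# Build an ext2 filesystem that marks those blocks bad and never uses them.
mke2fs -q -F -b 1024 -l badlist.txt disk.img
# dumpe2fs -b disk.img now lists 3000, 3001 and 3500 as bad.
```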

    But none of that is necessary on modern hard drives or SSDs with
    hardware-level sector remapping. If you see a bad block on one of
    those, it's a sign you've already exhausted all of the spare sectors on
    it. Most people would replace the drive long before that happened,
    usually as soon as SMART starts showing any remapped sectors.
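That SMART check can be sketched like this (normally one would pipe real `smartctl -A /dev/sdX` output; here the output is simulated with a here-document and the attribute values are illustrative):

```shell
# Filter the remap-related SMART attributes, as one would from real output:
#   smartctl -A /dev/sdX | grep -E 'Reallocated|Pending'
grep -E 'Reallocated|Pending' <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010   Pre-fail  Always   -   0
  9 Power_On_Hours          0x0032   099   099   000   Old_age   Always   -   1234
197 Current_Pending_Sector  0x0012   100   100   000   Old_age   Always   -   0
EOF
# A non-zero raw value (last column) for either attribute means the
# drive is already consuming its spare sectors.
```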

    Plus, defect rates on today's equipment are several orders of magnitude
    lower than they were when the badblocks utility was needed. Back then,
    it was assumed that every disk would have a few defects.

    Today's drives are also vastly cheaper, so it is viable to just replace
    ones showing bad blocks rather than needing to extend their lives by
    working around defects.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to Spiros Bousbouras on Sat Dec 2 20:23:10 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    I mean use badblocks either directly or through the -c option of
    mkfs and similar commands.

    Yes to both (although not frequently).

    My understanding is that, if a sector of a drive seems to go bad, the
    drive software automatically "remaps" it (or whatever the appropriate
    verb is) to a different sector

    That depends on the drive controller. Most do so (at least most more
    modern disks do) but your mileage may vary if you have a large enough
    set of different devices. Note here by "devices" I'm including USB
    thumb drives (which vary wildly in quality and reliability).

    so does running badblocks actually buy you anything ?

    That depends upon /why/ you want to run it, and what you are running it
    upon.

    Is badblocks only intended for hard drives or is it also
    potentially useful for solid state drives ?

    The 'badblocks' command is used to scan Linux block devices, so the
    'intended' use is to scan "block devices". Whether it is useful
    against whatever hardware is backing the Linux block device is a
    different question.

    If you use badblocks , do you do a read-write test or just read ?

    I've done both in the past. Although I've used read-only scans far
    more often than read-write scans.

    Some months ago I did with a new 128 gigabytes SSD

    mkfs.ext3 -c -c -j /dev/sdc1

    and it was taking forever. After several days during which progress
    seemed to be slowing down (based on the progress percentages printed
    by the programme) I stopped it and did a read only test which worked
    fine.

    Yes, 128G read/write will take some time, and when you cause the SSD to
run out of spare clean chunks of flash, you get to wait while
    it cleans up flash chunks before further writes occur.

    And, you consume write cycles on the flash, which for an SSD is
    directly correlated with eventual failure.

    Here are the scenarios where I've used badblocks:

    1) I have a stack of old drives pulled from various machines over time
    (could be spinning rust, could be SSD) but I don't know which are good
    vs. bad. So I hook each up (a USB to disk adapter works well here)
    and run a badblocks read-only scan on the device. If badblocks throws
    errors I know to "recycle" that piece of hardware. If it completes
    without error then I know that "drive" at least worked to read the
    whole disk.

    2) I have a disk in a machine that starts to throw read errors in a few
    spots in the kernel logs. And I don't have a spare to replace it with
just yet. Unmount, run badblocks (via a fsck.ext? scan, usually in
    read-only mode) against the partition to let the filesystem map out the
    bad sectors and hopefully "get by" on the remainder of the disk until a replacement disk can be purchased. Doing this can also give a 'clue'
    as to how close to failure a disk is. If the number of bad blocks
    remains steady after mapping them away, one likely has more time than
    if the number of bad blocks continues to grow over time.

    Only a handful of times (i.e., five or fewer) have I done a read/write badblocks scan in this scenario. I wanted to see if a write to the bad
    sectors would cause the controller to remap the sector to a spare. If
    I remember right, overwriting them didn't help any.

    3) You have a brand new, fresh from the factory, just opened, disk, and
    you want to give it a little bit of "burn-in testing" to see if it will
    be one of the small percentage that fail soon after use. If the disk
    is spinning rust, then a badblocks read-write scan will both burn-in
    test the disk by stressing it, and let you verify there are no bad
    sectors direct from the factory. If the disk is SSD, better to just do
    a read-only test rather than starting out by write wearing every flash
    block in the drive.

    4) You want to periodically scan your active disks looking for bad
    sectors to catch disk failures early, while there still may be time to
obtain, copy to, and install a replacement. So you set up a cron job to
    every so often run a read-only badblocks against each disk and compare
    the output to the previous run against that same disk. No new bad
    sectors, nothing to worry about. A new bad sector appears, well, maybe
    it is time to start planning for the eventual replacement of that disk
    in the near future.
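A minimal sketch of that periodic comparison (the file names are hypothetical; each `badblocks -o` output file holds one block number per line, and here both files are simulated so the comparison step is visible):

```shell
# Compare the previous scan's bad-block list with the latest one.
# In real use these would come from: badblocks -o <file> /dev/sdX
prev=badblocks.prev
curr=badblocks.curr
printf '' > "$prev"            # last run: clean
printf '123456\n' > "$curr"    # this run: one new bad block
if diff -q "$prev" "$curr" >/dev/null; then
    echo "no new bad sectors"
else
    echo "new bad sectors found - start planning a replacement"
fi
```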

    Note, in any case, having a proper system of backups is necessary, even
    if you use badblocks to look for and catch possible failures early
    before they become outright failures.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Spiros Bousbouras on Sat Dec 2 23:33:45 2023
    On 2023-12-02 13:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs and similar commands.

    No.

    My understanding is that , if a sector of a drive seems to go bad , the drive software automatically "remaps" it (or whatever the appropriate verb is) to
    a different sector so does running badblocks actually buy you anything ?

    Yes, but remapping only happens during write operations.



    Is badblocks only intended for hard drives or is it also potentially useful for solid state drives ?

    It was intended for ancient drives, before drives had automatic remapping.


    If you use badblocks , do you do a read-write test or just read ? Some months
    ago I did with a new 128 gigabytes SSD

    mkfs.ext3 -c -c -j /dev/sdc1

    and it was taking forever. After several days during which progress seemed to be slowing down (based on the progress percentages printed by the programme) I stopped it and did a read only test which worked fine.

Notice that the real test is writing; your disk could be very faulty if
you only do a read test.

    When a disk is suspect, first I do a SMART parameters read, then I do
    the long test, and possibly, I then write zeroes to the entire disk, and another long test. Then I check whether the remapped sector count has increased. If it did, I discard the disk.
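The procedure described above might look roughly like this (a sketch only: the device name is hypothetical, the dd step destroys all data on the disk, and SMART attribute names vary by vendor):

```shell
smartctl -A /dev/sdX                 # 1. read the SMART parameters
smartctl -t long /dev/sdX            # 2. start the drive's long self-test
smartctl -l selftest /dev/sdX        #    ...later, check the result
dd if=/dev/zero of=/dev/sdX bs=1M    # 3. write zeroes to the whole disk (DESTROYS DATA)
smartctl -t long /dev/sdX            # 4. second long test
smartctl -A /dev/sdX                 # 5. has Reallocated_Sector_Ct increased?
```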

    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Rich on Sat Dec 2 23:27:06 2023
    On 2023-12-02 21:32, Rich wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    ...

    I was also wondering , would it be possible for the operating system
    to write some data and then immediately read it back to make sure
    that it was written correctly?

    Possible in general: yes. In fact, the ancient Atari DOS for the very
    old Atari 810 5.25 floppy disk, if set to do so, would do exactly this.
    Write a sector, then immediately read it back and compare. Of course
    doing so slowed down disk writes by a considerable amount (and they
    were not anywhere near fast by today's standards to begin with).

Can Linux be configured to do so? Not unless one of the more esoteric filesystems has a "read after write" configuration option.

Better software verified a whole track after writing it, not a single
sector, to minimize head movements.



    The advantage relative to using badblocks is that with badblocks some
    blocks may become bad after you have run badblocks whereas with the
    method I'm proposing the checks would be ongoing.

    And your write speed would slow down to one disk block per disk platter rotation (or set of blocks, depending upon how it was implemented).
    For most users, the overall reliability is already sufficiently high
    enough that slowing down writes by 50x or 100x is not worth the
    fractional percentage gain in overall reliability.

MS-DOS could do exactly this. The setting was "VERIFY=ON".



    Obviously what I'm suggesting would negatively affect performance but
    it could be tunable ; like you could have an option for mounting a
    filesystem with such a functionality enabled or not. Does such a
    functionality exist all on Linux (with some types of filesystems) ?

    It might. I do not know of any that have it, but I also do not know
    the details of all the possible filesystems that are bundled with (or
    can be added to) a Linux kernel.

    I don't remember seeing it.

    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Riches@21:1/5 to Carlos E. R. on Sun Dec 3 04:16:50 2023
    On 2023-12-02, Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2023-12-02 21:32, Rich wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    ...

    I was also wondering , would it be possible for the operating system
    to write some data and then immediately read it back to make sure
    that it was written correctly?

    Possible in general: yes. In fact, the ancient Atari DOS for the very
    old Atari 810 5.25 floppy disk, if set to do so, would do exactly this.
    Write a sector, then immediately read it back and compare. Of course
    doing so slowed down disk writes by a considerable amount (and they
    were not anywhere near fast by today's standards to begin with).

Can Linux be configured to do so? Not unless one of the more esoteric
    filesystems has a "read after write" configuration option.

    Better software verified a track after writing it, not a sector.
    Minimize movements.



    The advantage relative to using badblocks is that with badblocks some
    blocks may become bad after you have run badblocks whereas with the
    method I'm proposing the checks would be ongoing.

    And your write speed would slow down to one disk block per disk platter
    rotation (or set of blocks, depending upon how it was implemented).
    For most users, the overall reliability is already sufficiently high
    enough that slowing down writes by 50x or 100x is not worth the
    fractional percentage gain in overall reliability.

MS-DOS could do exactly this. The setting was "VERIFY=ON".

    About 40 years ago, I saw that functionality described in some
    VMS documentation. However, I don't remember now whether it was
    on a per-file basis, per-open-for-write basis, or other.

    --
    Robert Riches
    spamtrap42@jacob21819.net
    (Yes, that is one of my email addresses.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to Carlos E. R. on Sun Dec 3 06:15:32 2023
    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2023-12-02 21:32, Rich wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    ...

    I was also wondering , would it be possible for the operating system
    to write some data and then immediately read it back to make sure
    that it was written correctly?

    Possible in general: yes. In fact, the ancient Atari DOS for the
    very old Atari 810 5.25 floppy disk, if set to do so, would do
    exactly this. Write a sector, then immediately read it back and
    compare. Of course doing so slowed down disk writes by a
    considerable amount (and they were not anywhere near fast by today's
    standards to begin with).

Can Linux be configured to do so? Not unless one of the more
    esoteric filesystems has a "read after write" configuration option.

    Better software verified a track after writing it, not a sector.
    Minimize movements.

    This was on an 8-bit, 6502 computer, with single density, single sided
    5.25 floppy disks that held 90KB each. A computer that, in its
    smallest configuration, came with either 8k or 16k of RAM (I forget
    which was the minimal config now). I doubt the authors of Atari DOS
    were thinking that buffering a track worth of data in that environment
    was worthwhile.

    As well, re-reading the same sector that was just written requires no
    head movement. But it does force the disk to wait for the sector to
    rotate around to the head again. So one gets to write at a maximum
    speed of one disk sector every two disk rotations.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to David W. Hodgins on Sun Dec 3 09:32:34 2023
    On 02/12/2023 17:24, David W. Hodgins wrote:
    Don't use badblocks on ssd drives.

Indeed. Most of these low-level storage control tools are now
replicated inside the SSD controller itself (e.g. badblocks), or are
rendered unnecessary by the actual storage architecture (defragmentation).

If there is a bad flash block, the SSD controller itself will have flagged
that already.
    With read/write seek times being essentially zero, there is no point in defragmenting at the computer level. The wear levelling will already be rearranging where everything is stored anyway.

    The only thing the SSD needs is a trim command now and then.


    --
“Some people like to travel by train because it combines the slowness of
a car with the cramped public exposure of an airplane.”

    Dennis Miller

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Spiros Bousbouras on Sun Dec 3 10:26:38 2023
    Spiros Bousbouras <spibou@gmail.com> writes:
    I mean use badblocks either directly or through the -c option of mkfs
    and similar commands.

    My understanding is that , if a sector of a drive seems to go bad ,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector

    Yes.

    so does running badblocks actually buy you anything ?

    No. Waste of time.

    Is badblocks only intended for hard drives or is it also potentially useful for solid state drives ?

    Neither. Medium error management moved inside the drives decades ago.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Robert Riches on Sun Dec 3 14:08:23 2023
    On 2023-12-03 05:16, Robert Riches wrote:
    On 2023-12-02, Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2023-12-02 21:32, Rich wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    ...



    The advantage relative to using badblocks is that with badblocks some
    blocks may become bad after you have run badblocks whereas with the
    method I'm proposing the checks would be ongoing.

    And your write speed would slow down to one disk block per disk platter
    rotation (or set of blocks, depending upon how it was implemented).
    For most users, the overall reliability is already sufficiently high
    enough that slowing down writes by 50x or 100x is not worth the
    fractional percentage gain in overall reliability.

MS-DOS could do exactly this. The setting was "VERIFY=ON".

    About 40 years ago, I saw that functionality described in some
    VMS documentation. However, I don't remember now whether it was
    on a per-file basis, per-open-for-write basis, or other.

    It was a setting that applied to programs started after setting it,
    AFAIR. It was usually done in autoexec.bat, IIRC.

    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Rich on Sun Dec 3 14:09:56 2023
    On 2023-12-03 07:15, Rich wrote:
    Carlos E. R. <robin_listas@es.invalid> wrote:
    On 2023-12-02 21:32, Rich wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 14:51:37 +0100
    Marco Moock <mm+usenet-es@dorfdsl.de> wrote:
On 02.12.2023 at 12:44:46, Spiros Bousbouras wrote:

    ...

    I was also wondering , would it be possible for the operating system
    to write some data and then immediately read it back to make sure
    that it was written correctly?

    Possible in general: yes. In fact, the ancient Atari DOS for the
    very old Atari 810 5.25 floppy disk, if set to do so, would do
    exactly this. Write a sector, then immediately read it back and
    compare. Of course doing so slowed down disk writes by a
    considerable amount (and they were not anywhere near fast by today's
    standards to begin with).

Can Linux be configured to do so? Not unless one of the more
    esoteric filesystems has a "read after write" configuration option.

    Better software verified a track after writing it, not a sector.
    Minimize movements.

    This was on an 8-bit, 6502 computer, with single density, single sided
    5.25 floppy disks that held 90KB each. A computer that, in its
    smallest configuration, came with either 8k or 16k of RAM (I forget
    which was the minimal config now). I doubt the authors of Atari DOS
    were thinking that buffering a track worth of data in that environment
    was worthwhile.

    As well, re-reading the same sector that was just written requires no
    head movement. But it does force the disk to wait for the sector to
    rotate around to the head again. So one gets to write at a maximum
    speed of one disk sector every two disk rotations.


    Aye. Movements ;-)

    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From MarioCCCP@21:1/5 to The Natural Philosopher on Tue Dec 5 14:42:34 2023
    On 03/12/23 10:32, The Natural Philosopher wrote:
    On 02/12/2023 17:24, David W. Hodgins wrote:
    Don't use badblocks on ssd drives.

Indeed. Most of these low-level storage control tools are
now replicated inside the SSD controller itself (e.g.
badblocks), or are rendered unnecessary by the actual
storage architecture (defragmentation).

If there is a bad flash block, the SSD controller itself will
have flagged that already.
    With read/write seek times being essentially zero, there is
    no point in defragmenting at the computer level. The wear
    levelling will already be rearranging where everything is
    stored anyway.

    The only thing the SSD needs is a trim command now and then.


who issues such a command, and when ? I am asking since I
cannot remember ever having typed it :(



    --
1) Resist, resist, resist.
2) If everyone pays taxes, taxes get paid by everyone
    MarioCPPP

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to MarioCCCP on Tue Dec 5 13:58:26 2023
    MarioCCCP <NoliMihiFrangereMentulam@libero.it> wrote:
    On 03/12/23 10:32, The Natural Philosopher wrote:
    On 02/12/2023 17:24, David W. Hodgins wrote:
    Don't use badblocks on ssd drives.

Indeed. Most of these low-level storage control tools are
now replicated inside the SSD controller itself (e.g.
badblocks), or are rendered unnecessary by the actual
storage architecture (defragmentation).

If there is a bad flash block, the SSD controller itself will
have flagged that already.
    With read/write seek times being essentially zero, there is
    no point in defragmenting at the computer level. The wear
    levelling will already be rearranging where everything is
    stored anyway.

    The only thing the SSD needs is a trim command now and then.

who issues such a command, and when ? I am asking since I cannot
remember ever having typed it :(

The proper 'who' is the filesystem, and the proper 'when' is when the
filesystem deallocates blocks due to deletion. The point of a 'trim'
command to an SSD is to let it know that the higher levels are finished
using those blocks, so it can go about doing its cleanup in the
background.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to MarioCCCP on Tue Dec 5 15:44:50 2023
    On 2023-12-05 14:42, MarioCCCP wrote:
    On 03/12/23 10:32, The Natural Philosopher wrote:
    On 02/12/2023 17:24, David W. Hodgins wrote:
    Don't use badblocks on ssd drives.

Indeed. Most of these low-level storage control tools are now
replicated inside the SSD controller itself (e.g. badblocks), or are
rendered unnecessary by the actual storage architecture (defragmentation).

If there is a bad flash block, the SSD controller itself will have
flagged that already.
    With read/write seek times being essentially zero, there is no point
    in defragmenting at the computer level. The wear levelling will
    already be rearranging where everything is stored anyway.

    The only thing the SSD needs is a trim command now and then.


    Who issues such a command, and when ? I am asking since I cannot remember
    ever having typed it :(

    It can be a mount option or a cron job or systemd timer.

    Laicolasse:~ # systemctl status fstrim.timer
    ● fstrim.timer - Discard unused blocks once a week
    Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled;
    vendor preset: enabled)
    Active: active (waiting) since Sat 2023-12-02 09:15:04 CET; 3 days ago
    Trigger: Mon 2023-12-11 01:34:19 CET; 5 days left
    Triggers: ● fstrim.service
    Docs: man:fstrim

    Dec 02 09:15:04 Laicolasse.valinor systemd[1]: Started Discard unused
    blocks once a week.
    Laicolasse:~ #


    --
    Cheers,
    Carlos E.R.

  • From The Natural Philosopher@21:1/5 to MarioCCCP on Tue Dec 5 17:49:50 2023
    On 05/12/2023 13:42, MarioCCCP wrote:
    On 03/12/23 10:32, The Natural Philosopher wrote:
    On 02/12/2023 17:24, David W. Hodgins wrote:
    Don't use badblocks on ssd drives.

    Indeed. Most of these low-level storage control tools are now
    replicated inside the SSD controller itself (e.g. badblocks), or are
    rendered unnecessary by the actual storage architecture (defragmentation)

    If there is a bad ram block, the SSD controller itself will have
    flagged that already.
    With read/write seek times being essentially zero, there is no point
    in defragmenting at the computer level. The wear levelling will
    already be rearranging where everything is stored anyway.

    The only thing the SSD needs is a trim command now and then.


    Who issues such a command, and when ? I am asking since I cannot remember
    ever having typed it :(


    I think that most linuxes issue that command under cron by default these
    days.

    e.g on my pi4B

    $ systemctl status fstrim.timer
    ● fstrim.timer - Discard unused blocks once a week
    Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; preset: enabled)
    Active: active (waiting) since Fri 2023-12-01 08:03:15 GMT; 4 days ago
    Trigger: Mon 2023-12-11 00:11:30 GMT; 5 days left
    Triggers: ● fstrim.service
    Docs: man:fstrim

    On Mint 20.3

    $ systemctl status fstrim.timer
    ● fstrim.timer - Discard unused blocks once a week
    Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor
    preset: >
    Active: active (waiting) since Fri 2023-10-27 07:27:36 BST; 1
    months 9 day>
    Trigger: Mon 2023-12-11 00:00:00 GMT; 5 days left
    Triggers: ● fstrim.service
    Docs: man:fstrim

    Oct 27 07:27:36 juliet systemd[1]: Started Discard unused blocks once a
    week.

    and so on.




    --
    All political activity makes complete sense once the proposition that
    all government is basically a self-legalising protection racket, is
    fully understood.

  • From Carlos E. R.@21:1/5 to Spiros Bousbouras on Tue Dec 5 23:51:50 2023
    On 2023-12-05 23:45, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs
    and similar commands.

    My understanding is that , if a sector of a drive seems to go bad , the drive
    software automatically "remaps" it (or whatever the appropriate verb is) to
    a different sector so does running badblocks actually buy you anything ?
    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user. Just
    like you said.

    Does "spare sectors" mean on top of the advertised capacity of the drive ?

    Yes.

    If
    yes , could a drive which is not full find more spare sectors by reducing its advertised capacity (and informing the operating system) ?

    No.


    Before that time (1980s and earlier, really) that wasn't the case. As
    another poster alluded to, hard drives of that era would come with a
    piece of paper listing the known bad blocks from manufacturing defects,
    with space for you to write in additional entries as you found them.
    That list could then be used when creating a filesystem (e.g., see the
    -l option to mke2fs) so that the filesystem would not use those known
    bad blocks.

    So one had to copy by hand the list to a file ?

    Or to the low-level format program.

    Interesting. How long did such
    lists tend to be ?

    Tiny. One or two entries was normal.

    --
    Cheers,
    Carlos E.R.

  • From Rich@21:1/5 to Spiros Bousbouras on Wed Dec 6 02:34:59 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 20:23:10 -0000 (UTC)
    Rich <rich@example.invalid> wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:

    [...]

    Note here by "devices" I'm including USB
    thumb drives (which vary wildly in quality and reliability).

    SanDisk and Sony have worked well for me.

    Those are two of the better brands (provided you don't accidentally get
    a fake). But they are by far from the /only/ brands.

    so does running badblocks actually buy you anything ?

    That depends upon /why/ you want to run it, and what you are running
    it upon.

    Is badblocks only intended for hard drives or is it also
    potentially useful for solid state drives ?

    The 'badblocks' command is used to scan Linux block devices, so the
    'intended' use is to scan "block devices". Whether it is useful
    against whatever hardware is backing the Linux block device is a
    different question.

    Surely the intended use is for situations where it's useful.

    Yes, for situations where you want to scan a device for read (or read
    write) errors. But it is not /meant/ for "hard drives" only. I.e.,
    one could conceivably use it to scan a tape for read errors.

    If you use badblocks, do you do a read-write test or just read ?

    I've done both in the past. Although I've used read-only scans far
    more often than read-write scans.
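    For reference, the two modes differ only in a flag. A sketch, with the
    device path left as a deliberate placeholder, since the read-write mode
    destroys all data on the target:

```shell
dev=/dev/sdX   # placeholder: substitute the real (unmounted!) device

if [ -b "$dev" ]; then
    badblocks -sv "$dev"      # read-only scan: non-destructive
    # badblocks -wsv "$dev"   # read-write scan: overwrites EVERYTHING
else
    echo "refusing to scan: $dev is not a block device"
fi
```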


    [...]

    Here are the scenarios where I've used badblocks:

    [...]

    These are very useful, thank you.

    Note, in any case, having a proper system of backups is necessary,
    even if you use badblocks to look for and catch possible failures
    early before they become outright failures.

    Of course. It is still possible for a drive to fail suddenly and
    completely, isn't it ?

    Yes, and from some of the reports, SSDs often seem to fail in this
    way: little to no warning, and poof, the drive no longer works.

  • From Rich@21:1/5 to Spiros Bousbouras on Wed Dec 6 03:11:15 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option
    of mkfs and similar commands.

    My understanding is that, if a sector of a drive seems to go bad,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector so does running
    badblocks actually buy you anything ?

    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user. Just
    like you said.

    Does "spare sectors" mean on top of the advertised capacity of the
    drive?

    Yes, modern drives have additional spares beyond what is advertised.

    If yes, could a drive which is not full find more spare sectors by
    reducing its advertised capacity (and informing the operating
    system)?

    It could, but the difference would be small (i.e., you will not get 25%
    more capacity). Too small to make a difference to you the user. The
    spares are present for two reasons: to increase the percentage of "good
    drives" off the assembly line by allowing manufacturing defects to be
    hidden, and to increase the percentage of drives that make it past the
    end of the warranty period before they visibly show issues to the
    owner.

    Before that time (1980s and earlier, really) that wasn't the case.
    As another poster alluded to, hard drives of that era would come
    with a piece of paper listing the known bad blocks from
    manufacturing defects, with space for you to write in additional
    entries as you found them. That list could then be used when
    creating a filesystem (e.g., see the -l option to mke2fs) so that
    the filesystem would not use those known bad blocks.

    So one had to copy by hand the list to a file?

    Or enter it into the "format" program in real time, one sector at a
    time, before it began formatting the drive.

    Interesting. How long did such lists tend to be?

    On the top of the 20MB hard drive that was in my first PC, IIRC the
    list was about 8 or 10 sectors long.
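    That manual workflow maps directly onto options that still exist in
    e2fsprogs: badblocks -o writes the block numbers it finds to a file, and
    mke2fs -l (or mkfs.ext4 -l) reads such a file when creating the
    filesystem. A sketch with a placeholder device; the fallback branch just
    recreates the hand-typed list of the paper-label era:

```shell
dev=/dev/sdX    # placeholder: the drive being formatted
list=bad.list   # one bad block number per line, as on the old labels

if [ -b "$dev" ]; then
    badblocks -sv -o "$list" "$dev"   # read-only scan, numbers saved to file
    mkfs.ext4 -l "$list" "$dev"       # filesystem marks those blocks unusable
else
    printf '123\n456\n' > "$list"     # a hand-typed two-entry list
    wc -l < "$list"                   # counts the entries
fi
```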

  • From The Natural Philosopher@21:1/5 to Spiros Bousbouras on Wed Dec 6 11:45:43 2023
    On 05/12/2023 22:45, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs
    and similar commands.

    My understanding is that , if a sector of a drive seems to go bad , the drive
    software automatically "remaps" it (or whatever the appropriate verb is) to
    a different sector so does running badblocks actually buy you anything ?
    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user. Just
    like you said.

    Does "spare sectors" mean on top of the advertised capacity of the drive ?
    If yes , could a drive which is not full find more spare sectors by reducing
    its advertised capacity (and informing the operating system) ?

    It may inform the operating system, but never the purchaser. The
    internet is full of drives that magically never have the capacity they
    were sold as.

    Before that time (1980s and earlier, really) that wasn't the case. As
    another poster alluded to, hard drives of that era would come with a
    piece of paper listing the known bad blocks from manufacturing defects,
    with space for you to write in additional entries as you found them.
    That list could then be used when creating a filesystem (e.g., see the
    -l option to mke2fs) so that the filesystem would not use those known
    bad blocks.

    So one had to copy the list by hand to a file ? Interesting. How long did such lists tend to be ?

    10-20 sectors IIRC usually

    --
    "I am inclined to tell the truth and dislike people who lie consistently.
    This makes me unfit for the company of people of a Left persuasion, and
    all women"

  • From The Natural Philosopher@21:1/5 to Rich on Wed Dec 6 11:57:00 2023
    On 06/12/2023 02:34, Rich wrote:
    It is still possible for a drive to fail suddenly and
    completely, isn't it ?
    Yes, and from some of the reports, SSDs often seem to fail in this
    way: little to no warning, and poof, the drive no longer works.

    I can confirm this. A relatively new (Kingston) drive under warranty
    started taking huge amounts of time to boot, and SMART said it was well
    fucked in terms of recorded errors. Replaced under warranty.

    I was lucky that the drive had no irreplaceable data on it and the
    vendor was scrupulously honest.

    I can think of many scenarios where electronics could fail in such a way
    as to render huge tracts of NVRAM inaccessible: typically, slightly
    out-of-spec chipsets (too much delay) ageing enough in a hot PC to go fully
    out of spec on some decode patterns.

    However I haven't had any failures since.

    I did monitor SMART parameters quite often on my SSDs, but have given up
    because I have never found any sign of failure or a bad RAM block since.
    I don't regularly hammer them.
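    For anyone who does want the occasional spot check, the relevant counters
    (reallocated and pending sectors) sit in the SMART attribute table, which
    smartctl from smartmontools can read. A sketch with a placeholder device;
    reading attributes needs root:

```shell
dev=/dev/sdX   # placeholder: substitute your drive

if command -v smartctl >/dev/null 2>&1 && [ -b "$dev" ]; then
    # -A prints the attribute table; grep the sector-health lines
    smartctl -A "$dev" | grep -i -e reallocated -e pending
else
    echo "smartctl unavailable or $dev not present"
fi
```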

    It is however quite interesting to see what failures do pop up in
    electronic kit: these days bad capacitors seem as prevalent as they
    always have been, but I got a completely inexplicable failure in a low
    level operational amplifier in one piece of kit I fixed. So embedded
    that it couldn't have been caused by anything external. It just failed.

    --
    It is the folly of too many to mistake the echo of a London coffee-house
    for the voice of the kingdom.

    Jonathan Swift

  • From The Natural Philosopher@21:1/5 to Spiros Bousbouras on Wed Dec 6 12:24:21 2023
    On 05/12/2023 22:19, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 20:23:10 -0000 (UTC)
    Rich <rich@example.invalid> wrote:
    Spiros Bousbouras <spibou@gmail.com> wrote:

    [...]

    Note here by "devices" I'm including USB
    thumb drives (which vary wildly in quality and reliability).

    SanDisk and Sony have worked well for me.

    so does running badblocks actually buy you anything ?

    That depends upon /why/ you want to run it, and what you are running it
    upon.

    Is badblocks only intended for hard drives or is it also
    potentially useful for solid state drives ?

    The 'badblocks' command is used to scan Linux block devices, so the
    'intended' use is to scan "block devices". Whether it is useful
    against whatever hardware is backing the Linux block device is a
    different question.

    Surely the intended use is for situations where it's useful.

    If you use badblocks , do you do a read-write test or just read ?

    I've done both in the past. Although I've used read-only scans far
    more often than read-write scans.


    [...]

    Here are the scenarios where I've used badblocks:

    [...]

    These are very useful , thank you.

    Note, in any case, having a proper system of backups is necessary, even
    if you use badblocks to look for and catch possible failures early
    before they become outright failures.

    Of course. It is still possible for a drive to fail suddenly and completely , isn't it ?

    Very much so if the drive electronics fails. I did once change a
    Winchester's PCB to get it working again.

    --
    New Socialism consists essentially in being seen to have your heart in
    the right place whilst your head is in the clouds and your hand is in
    someone else's pocket.

  • From Carlos E. R.@21:1/5 to The Natural Philosopher on Wed Dec 6 22:06:11 2023
    On 2023-12-06 12:45, The Natural Philosopher wrote:
    On 05/12/2023 22:45, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs
    and similar commands.

    My understanding is that , if a sector of a drive seems to go bad ,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector so does running badblocks
    actually buy you anything ?

    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user.  Just
    like you said.

    Does "spare sectors" mean on top of the advertised capacity of the
    drive ? If
    yes , could a drive which is not full find more spare sectors by
    reducing its
    advertised capacity (and informing the operating system) ?

    It may inform the operating system, but never the purchaser.  The
    internet is full of drives that magically never have the capacity they
    were sold as.

    Not to my knowledge.

    It is full of people who do not realize that disk vendors have, since 1985
    at least, used decimal units of bytes (such as MB), while the
    rest of the computer industry used binary units of bytes (such as
    MiB) but named them MB.

    The industry was late in recognizing that the two systems of units should
    have different names, so the confusion lasted a long time.
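    The gap is pure arithmetic: a drive sold as "1 TB" holds 10^12 bytes,
    which a utility reporting in binary units divides by 2^30 and shows as
    roughly 931 GiB, with nothing actually missing:

```shell
# A "1 TB" (decimal) drive, expressed in GiB (binary):
# 1,000,000,000,000 / 1,073,741,824 = 931 (integer part)
echo $(( 1000000000000 / 1073741824 ))   # prints 931
```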

    --
    Cheers,
    Carlos E.R.

  • From The Natural Philosopher@21:1/5 to Carlos E. R. on Thu Dec 7 10:22:35 2023
    On 06/12/2023 21:06, Carlos E. R. wrote:
    On 2023-12-06 12:45, The Natural Philosopher wrote:
    On 05/12/2023 22:45, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    I mean use badblocks either directly or through the -c option of mkfs
    and similar commands.

    My understanding is that , if a sector of a drive seems to go bad ,
    the drive software automatically "remaps" it (or whatever the
    appropriate verb is) to a different sector so does running badblocks
    actually buy you anything ?

    Badblocks is a leftover from a bygone era.

    For the last 30 years or so hard drives have had spare sectors and
    remapped bad ones to good ones, transparently to the end-user.  Just
    like you said.

    Does "spare sectors" mean on top of the advertised capacity of the
    drive ? If
    yes , could a drive which is not full find more spare sectors by
    reducing its
    advertised capacity (and informing the operating system) ?

    It may inform the operating system, but never the purchaser.  The
    internet is full of drives that magically never have the capacity they
    were sold as.

    Not to my knowledge.

    It is full of people who do not realize that disk vendors have, since 1985
    at least, used decimal units of bytes (such as MB), while the
    rest of the computer industry used binary units of bytes (such as
    MiB) but named them MB.

    The industry was late in recognizing that the two systems of units should
    have different names, so the confusion lasted a long time.

    That is another issue. I am talking about '64GB memory sticks' that have
    <4GB capacity.



    --
    Renewable energy: Expensive solutions that don't work to a problem that
    doesn't exist instituted by self legalising protection rackets that
    don't protect, masquerading as public servants who don't serve the public.

  • From vamastah@21:1/5 to The Natural Philosopher on Fri Dec 8 22:04:42 2023
    On Thu, 7 Dec 2023 10:22:35 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 06/12/2023 21:06, Carlos E. R. wrote:
    On 2023-12-06 12:45, The Natural Philosopher wrote:
    On 05/12/2023 22:45, Spiros Bousbouras wrote:
    On Sat, 2 Dec 2023 15:40:12 -0500
    John-Paul Stewart <jpstewart@personalprojects.net> wrote:
    On 2023-12-02 07:44, Spiros Bousbouras wrote:
    ...

    ...

    Does "spare sectors" mean on top of the advertised capacity of
    the drive ? If
    yes , could a drive which is not full find more spare sectors by
    reducing its
    advertised capacity (and informing the operating system) ?

    It may inform the operating system, but never the purchaser.  The
    internet is full of drives that magically never have the capacity
    they were sold as.

    Not to my knowledge.

    ...

    That is another issue. I am talking about '64GB memory sticks' that
    have <4GB capacity.


    You can easily buy them on AliExpress; just search for 2 TB pendrives
    at a cost of $5. What's funny is that everybody is happy with them: the
    rating is usually pretty high, i.e. over 4.5. When I bought one, I didn't
    have the heart to tell people they had been scammed.
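    Fake-capacity sticks are straightforward to expose: fill the whole
    advertised capacity with verifiable data and read it back. The f3 tools
    (f3write/f3read) automate exactly that; this sketch assumes the f3
    package is installed and uses a placeholder mount point:

```shell
mnt=/mnt/stick   # placeholder: mount point of the suspect drive

if command -v f3write >/dev/null 2>&1 && [ -d "$mnt" ]; then
    f3write "$mnt"   # write test files up to the advertised capacity
    f3read  "$mnt"   # read back; corrupt files mark the real capacity
else
    echo "f3 not installed or $mnt not mounted"
fi
```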

  • From Computer Nerd Kev@21:1/5 to vamastah on Sat Dec 9 08:13:38 2023
    vamastah <szymoraw@wp.pl> wrote:
    On Thu, 7 Dec 2023 10:22:35 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    That is another issue. I am talking about '64GB memory sticks' that
    have <4GB capacity.

    You can easily buy them on AliExpress, just search for 2 TB pendrives
    at the cost of $5. What's funny, everybody is happy with them, rating
    is usually pretty high, i.e. over 4.5. When I bought one, I had no
    heart to inform people they were scammed.

    I bought a 128MB (yes "MB") one off there once and even that died
    completely when I attempted the first write (of a tiny text file).
    I did get a refund easily though.

    --
    Note: I won't see posts made from Google Groups
