• Copying a number of files one-by-one onto a USB stick slows down to a crawl

    From R.Wieser@21:1/5 to All on Wed Jan 12 12:28:35 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Hello all,

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to
    copy a number of files to an 8 GByte, FAT32-formatted USB stick, and notice
    that the whole thing slows down to a crawl (about 5 GByte consisting of
    50,000 files in over eight hours). I've also tried another, different USB
    stick and see the same thing happen.

    Windows task-manager shows my program's processor usage as zero percent.

    Remark: I've used the same program to copy even larger quantities of files
    to a USB hard disk, and have not experienced any kind of slowdown there.

    I know that a USB stick is rather slow in comparison to a hard disk, but
    the above is just ludicrous. :-(

    Does anyone have an idea what is going on, and how to speed the whole thing
    up?

    Regards,
    Rudy Wieser

    P.s.
    I remember having had similar problems while drag-and-dropping the source folder onto the USB stick. IOW, not a problem with the program itself.
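    As a platform-neutral sketch of the copy loop described above (Python's shutil.copy2 standing in for the Win32 CopyFile call; paths, names, and the threshold are made up for illustration), per-file timing shows exactly when a device falls off a cliff:

```python
import shutil
import time
from pathlib import Path

def copy_tree_timed(src: Path, dst: Path, slow_threshold: float = 1.0):
    """Copy every file under src into dst (shutil.copy2 standing in
    for Win32 CopyFile), and report any file whose copy took longer
    than slow_threshold seconds - to pinpoint where a device
    suddenly slows to a crawl."""
    slow = []
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        t0 = time.monotonic()
        shutil.copy2(f, target)  # copies data plus timestamps
        elapsed = time.monotonic() - t0
        if elapsed > slow_threshold:
            slow.append((str(f), elapsed))
    return slow
```

    Pointing dst at the stick's drive letter and logging the result would show whether the slow regime starts at a fixed point in the transfer or creeps in gradually.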

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to R.Wieser on Wed Jan 12 07:42:24 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 1/12/2022 6:28 AM, R.Wieser wrote:
    Hello all,

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried a another, different USB stick and see the same thing happen.

    Windows task-manager shows my programs procesor usage as zero percent.

    Remark: I've used the same program to copy even larger quantities of files
    to an USB harddisk, and have not experienced any kind of slowdown there.

    I know that an USB stick is rather slow in comparision to a harddisk, but
    the above is just ludicrous. :-(

    Does anyone have an idea what is going on and how to speed the whole thing
    up ?

    Regards,
    Rudy Wieser

    P.s.
    I remember having had similar problems while drag-and-dropping the source folder onto the USB stick. IOW, not a problem with the program itself.

    I don't have an answer for which part of the system is screwing up.

    As a workaround, try to package the files with 7-ZIP instead.

    https://www.7-zip.org/

    You can request 7-ZIP to tar the files, and it also has a
    segmentation function, so it can ensure that the output files are
    less than 4 GB, to beat the FAT32 limitation on file size for the
    archive file. There is a place in the GUI interface to set the segment size.

    some.zip   \
    some.zip1   \___ (Bad illustration of beating the 4 GB file-size
    some.zip2   /     limit of FAT32... You probably know the file
                      extension for ZIP better than I do :-) )

    Do the compression or packaging step on the hard drive. Then
    copy across one or more of the large segment files to the USB stick.

    That will give a better idea of the speed of the stick. Large
    files will do the fewest directory updates and display more of
    the sustained transfer rate of the device.

    The reason you don't want to point 7-ZIP at the stick directly
    for the output step, is to not give it any excuses this time.

    7-ZIP can be used later, to do random access within the archive
    on the stick, if so desired. If you need NOTE.txt from the 50,000
    files, you can get it without unpacking the entire archive.

    By preparing the archive file on the hard drive first, you'll be
    doing the largest flash block copies possible that way.
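    That archive-first workflow can be sketched as follows (Python's zipfile standing in for 7-ZIP, ZIP_STORED i.e. uncompressed for speed; all paths here are hypothetical, and a real 5 GB set would additionally need the sub-4 GB segmenting described above):

```python
import shutil
import zipfile
from pathlib import Path

def pack_then_copy(src: Path, work: Path, stick: Path, name: str = "backup.zip"):
    """Build one large archive on the hard drive first, then copy
    that single file to the stick: one big sequential write instead
    of tens of thousands of small ones with per-file metadata updates."""
    archive = work / name
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as z:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                z.write(f, f.relative_to(src))  # keep relative layout
    shutil.copy2(archive, stick / name)
    return stick / name
```

    Doing the packaging step on the hard drive, as suggested, keeps the stick's workload down to a single sustained transfer.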

    You might also want to inspect the file system of the destination
    device and make sure the cluster size is 32KB or whatever, rather
    than just 512 bytes. An accidental too-small cluster size will
    do stuff like that. Flash page sizes are pretty large, and asking
    a USB stick to do things in 512-byte clusters brings out the worst
    (read-modify-write at the physical level) in it. That burns up
    the flash write life like crazy. The 32KB cluster size choice
    gets closer, in powers of 2, to the true page size in the hardware.
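    The arithmetic behind that can be made concrete (the 100 KB file size and the one-rewrite-per-cluster-write model are illustrative assumptions, not measured values):

```python
# Toy model of write amplification: assume every cluster-sized write
# forces the stick to rewrite at least one physical unit, so smaller
# clusters mean proportionally more writes per file.
FILE_SIZE = 100 * 1024  # one typical small file, 100 KB (assumed)

def cluster_writes(cluster_size: int) -> int:
    """Number of cluster-sized writes needed to store FILE_SIZE bytes."""
    return -(-FILE_SIZE // cluster_size)  # ceiling division

writes_512 = cluster_writes(512)         # 200 writes per file
writes_32k = cluster_writes(32 * 1024)   # 4 writes per file
print(writes_512, writes_32k, writes_512 // writes_32k)  # → 200 4 50
```

    Fifty times fewer cluster writes per file, before the flash translation layer even enters the picture.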

    This command agrees with the value displayed when I select "format"
    on the associated USB stick partition letter:

    wmic volume get driveletter,blocksize

    You can add exFAT support to Windows XP via an optional download
    and install. exFAT supports stuff like a 1MB cluster or so, and
    exFAT was designed with flash storage devices in mind. But again,
    there's really no excuse for the poor performance. exFAT is native
    to Windows 7; no need to install it as an add-on there.

    I could use a Sandisk Extreme or Sandisk Extreme Pro and this transfer
    would be finished in two minutes, tops.

    You can put USB3 on a WinXP era machine, using a plugin card with
    a NEC USB3 chip on it. They still sell PCIe versions of those
    (a poster bought one a month ago). The NEC chip was one of the few
    to include WinXP driver support.

    The PCI card version of such things (100MB/sec limit) will be
    very hard to find. The bridge chip company got bought up and crushed,
    and bridge chips might cost 5x as much as they used to, which has
    caused distortion of the computer industry products in response.
    This is why they don't put bifurcation logic on motherboards any more.
    One company made that happen, via piggish practice.

    If you wanted USB3 on your Windows XP machine, and it only had PCI
    slots, you needed to be upgrading around ten years ago. That
    was back when the PCI to PCIe x1 bridge chip might have been
    five bucks or so. And then the bridge chip feeds the NEC chip,
    on the PCI card.

    Paul

  • From Mayayana@21:1/5 to R.Wieser on Wed Jan 12 08:27:22 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    "R.Wieser" <address@not.available> wrote

    | I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to
    | copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice
    | that the whole thing slows down to a crawl (about 5 GByte consisting of
    | 50.000 files in over eight hours). I've also tried a another, different
    USB
    | stick and see the same thing happen.
    |

    I don't know any solution. I have noticed that XP doesn't
    do USB well. Maybe because it's USB2? Win7 is much faster,
    with particular effort. I assume the problem is the USB drivers.
    The only other thing I can think of would be if you were using
    some kind of AV program. Most of those seem to default to
    scanning everything you touch.

  • From R.Wieser@21:1/5 to All on Wed Jan 12 15:00:01 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Mayayana,

    Maybe because it's USB2?

    :-) That's why I mentioned that I also tried a USB HD. If it were the
    USB connection itself, the HD would also be slow. It isn't: when I
    tried just now, it copied the same thing in about 7 minutes. That's over
    *60 times* faster. :-|

    The only other thing I can think of would be if you were
    using some kind of AV program.

    In that case I should have seen it work its ass off in the task manager.
    Which I didn't. 0% usage by any program, 99% idle time.

    Regards,
    Rudy Wieser

  • From R.Wieser@21:1/5 to All on Wed Jan 12 14:52:54 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Paul,

    Thanks for the suggestions.

    The reason why I copy files is that I can then access them using any kind of file browser - even a simple one which doesn't understand the concept of ZIP folders.

    As for the cluster size, the second USB stick was new, and as such I assume
    it was pre-formatted with the optimum block size. Also, it was NTFS (I had hoped that would make a difference, but it looks like it doesn't).

    And there is a problem with that suggestion: years ago, when I tried the
    same (a micro-SD connected to a DOS 'puter), I was unable to get that
    optimum from the USB stick itself. No matter which drive-interrogation
    function I looked at, none of them returned the size of the flash chip's
    memory blocks. :-(

    I could use a Sandisk Extreme or Sandisk Extreme Pro and this transfer
    would be finished in two minutes, tops.

    I assume my (rather old) machine has USB 2.0 tops, but even there a copy to
    a USB HD takes just 7 minutes.

    If you wanted USB3 on your Windows XP machine,

    :-) Yup.

    But the question is not about the speed of the USB connection, but why the
    (USB-attached) memory stick is so much slower (as in: more than 60 times)
    than the (USB-attached) HD.

    I'll re-look into that cluster size though. Who knows ...

    Regards,
    Rudy Wieser

  • From Char Jackson@21:1/5 to All on Wed Jan 12 11:41:47 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Wed, 12 Jan 2022 12:28:35 +0100, "R.Wieser" <address@not.available>
    wrote:

    Hello all,

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried a another, different USB stick and see the same thing happen.

    Windows task-manager shows my programs procesor usage as zero percent.

    Remark: I've used the same program to copy even larger quantities of files
    to an USB harddisk, and have not experienced any kind of slowdown there.

    I know that an USB stick is rather slow in comparision to a harddisk, but
    the above is just ludicrous. :-(

    Does anyone have an idea what is going on and how to speed the whole thing
    up ?

    My totally unscientific observation is that writing to a USB2 device
    starts out fast, and when it inevitably slows to a crawl I notice that
    the device is blazing hot. If I allow it to cool down, the high write
    speed returns until it gets hot again. Is there a correlation between
    write speed and heat? *shrug* I don't know, but I can consistently
    duplicate that behavior here on my devices.

  • From Sjouke Burry@21:1/5 to R.Wieser on Wed Jan 12 18:20:17 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 12.01.22 12:28, R.Wieser wrote:
    Hello all,

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried a another, different USB stick and see the same thing happen.

    Windows task-manager shows my programs procesor usage as zero percent.

    Remark: I've used the same program to copy even larger quantities of files
    to an USB harddisk, and have not experienced any kind of slowdown there.

    I know that an USB stick is rather slow in comparision to a harddisk, but
    the above is just ludicrous. :-(

    Does anyone have an idea what is going on and how to speed the whole thing
    up ?

    Regards,
    Rudy Wieser

    P.s.
    I remember having had similar problems while drag-and-dropping the source folder onto the USB stick. IOW, not a problem with the program itself.


    Dump the files in a zip archive (which is quite fast), and
    put the zip file on the stick.
    That also increases the life of the stick.

  • From R.Wieser@21:1/5 to All on Thu Jan 13 10:53:55 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Char,

    My totally unscientific observation is that writing to a USB2 device
    starts out fast, and when it inevitably slows to a crawl I notice
    that the device is blazing hot.

    I tried checking the stick's temperature (removed the "sleeve") and regularly
    touched both the processor and the flash chip on the other side. I
    would not call it hot by any means, just a bit warm.

    If I allow it to cool down,

    I'll have to try adding a keystroke to halt the copying temporarily and see
    what happens.

    (@all)

    And by the way, it looks like the slowing-down is not gradual (as I would expect when a device gets hotter), but sudden. It took 45 minutes to
    copy about 4 GByte, and after that each file takes 6 seconds.

    Odd to say the least ...

    Regards,
    Rudy Wieser

  • From Paul@21:1/5 to R.Wieser on Thu Jan 13 08:18:59 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 1/13/2022 4:53 AM, R.Wieser wrote:
    Char,

    My totally unscientific observation is that writing to a USB2 device
    starts out fast, and when it inevitably slows to a crawl I notice
    that the device is blazing hot.

    I tried checking the sticks temperature (removed the "sleeve") and regulary touched both the processor as well as the flash chip on the other side. I would not call it hot by any means, just a bit warm.

    If I allow it to cool down,

    I have to try to add a keystroke to halt the copying temporarily and see
    what happens.

    (@all)

    And by the way, it looks like the slowing-down is not gradually (as I would expect when a device gets hotter), but suddenly. It took 45 minutes to
    copy about 4 GByte, and after that each file takes 6 seconds.

    Odd to say the least ...

    Regards,
    Rudy Wieser



    I bet the problem does not occur if you use Robocopy or
    if you do the copy from a Command Prompt window.

    It might be a side-effect of how File Explorer handles
    the file records it is reading. All it takes is around
    50,000 files in a folder to "confuse" the File Explorer.

    I discovered this, by converting movies to a series of
    JPG files (for usage with AVIdemux 2.5). Attempts to
    edit the movie, by removing the first thousand files and
    deleting them in the trash, resulted in the other 49,000
    not refreshing properly in the File Explorer window.

    On Windows XP, Robocopy can be installed via a download.

    On Win7 it would be a built-in command. Each OS has a
    different version of Robocopy, with slight differences in
    parameters to be passed.

    Robocopy is a folder copying program, so it expects
    two folders as arguments.

    Paul

  • From R.Wieser@21:1/5 to All on Thu Jan 13 17:16:35 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Paul,

    I bet the problem does not occur .... if you do the copy from a Command Prompt window.

    Actually, my program is console based. :-)

    On Windows XP, Robocopy can be installed via a download.

    I've heard of RoboCopy. But alas, I have much more fun trying to create my
    own programs for it (as mentioned, using the CopyFile function out of
    kernel32).

    Also, as mentioned, the problem seems to be related to the USB stick, as
    doing the same copy to a USB (spinning rust) disk doesn't show any kind of slowdown.

    Today I've been testing a number of different things, including throwing quite
    a number of files away and restarting the copying (which skips files that are already there). Alas, once the copying starts to get slow (a steady 6
    seconds between files) it stays that way; not even taking the stick out and letting it "rest" for half an hour (and then restarting) helps. A reformat (of the "quick" type) does allow it to become fast again though.

    By the way, I said a second, NTFS-formatted stick also got slow. I
    just /might/ have been a bit too hasty there: although /much/ slower than
    the USB HD, this morning it finished a copy in about 50 minutes.

    I've just reformatted the original stick to NTFS and have started another copy. I'm currently waiting to see how it goes ...

    ....

    Well, I've aborted the copy after about an hour, with just 1.6 GByte (out of
    5 GByte) having been copied. As such it does /much/ worse than the second stick.

    I'm currently assuming that I was just unlucky enough to have bought a wonky USB stick. I think the trash is the correct place for it.


    Just why is Murphy so nasty to me? The last time, I assumed that I did my homework and it turned out that I forgot something; now that I'm trying to put
    the blame on me (my program not doing its thing), it turns out to be
    something I have zero control over ...

    Oh wait, IIRC that is one of Murphy's other rules. :-)

    Regards,
    Rudy Wieser

  • From Char Jackson@21:1/5 to All on Thu Jan 13 12:01:47 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Thu, 13 Jan 2022 10:53:55 +0100, "R.Wieser" <address@not.available>
    wrote:

    Char,

    My totally unscientific observation is that writing to a USB2 device
    starts out fast, and when it inevitably slows to a crawl I notice
    that the device is blazing hot.

    I tried checking the sticks temperature (removed the "sleeve") and regulary touched both the processor as well as the flash chip on the other side. I would not call it hot by any means, just a bit warm.

    Thanks for checking. On at least two of my USB2 sticks, they get too hot
    to touch if I do extended writes, but it sounds like you've disproved my theory.

  • From Herbert Kleebauer@21:1/5 to R.Wieser on Thu Jan 13 18:37:29 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 13.01.2022 17:16, R.Wieser wrote:

    Well, I've aborted the copy after about an hour, with just 1.6 GByte (outof 5 GByte) having been copied. As such it does /much/ worse than the second stick.

    What is the setting for "Enfernungsrichtlinie" in the properties of the disk?

    [x] Quick removal ("Schnelles Entfernen", default)
        Disables write caching on the device and in Windows. The device
        can nevertheless be safely removed without using the "Safely
        Remove Hardware" icon.

    [ ] Better performance ("Bessere Leistung")
        Enables write caching in Windows ....

    Did you try to copy 1-byte files? Maybe without a cache, the writing
    of the directory entry for each file slows down the copy more than the
    data transfer itself.
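    That experiment can be scripted roughly like this (a sketch with a made-up target path; on the real stick, the many-small-files timing should dwarf the single-file timing if per-file metadata writes dominate):

```python
import time
from pathlib import Path

def one_byte_benchmark(target: Path, count: int = 100):
    """Write `count` one-byte files, then one file of `count` bytes.
    If the many-small-files case is far slower, per-file directory
    updates - not raw transfer speed - dominate the copy time."""
    target.mkdir(parents=True, exist_ok=True)
    t0 = time.monotonic()
    for i in range(count):
        (target / f"tiny_{i:05d}.bin").write_bytes(b"x")
    many = time.monotonic() - t0
    t0 = time.monotonic()
    (target / "single.bin").write_bytes(b"x" * count)
    single = time.monotonic() - t0
    return many, single
```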

  • From R.Wieser@21:1/5 to All on Thu Jan 13 21:56:34 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Char,

    Thanks for checking.

    Thank you for mentioning it. I had not considered it as a possible cause.

    On at least two of my USB2 sticks, they get too hot to touch
    if I do extended writes, but it sounds like you've disproved
    my theory.

    Too many different companies making them. I've had a few of a brand which would crap out after just a few uses. On the other hand, I've still got a 1 GByte stick which got weekly use and is now over a decade old.

    Regards,
    Rudy Wieser

  • From R.Wieser@21:1/5 to All on Thu Jan 13 21:35:08 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Herbert,

    What is the setting for "Enfernungsrichtlinie" in properties of the disk?

    When the memory stick was FAT32 formatted, it was the first. Later I
    formatted it as NTFS, which is a choice you only get when the second is
    enabled.

    In short, I have had both for that stick. It didn't make much of a difference.

    Regards,
    Rudy Wieser

  • From Charlie Gibbs@21:1/5 to R.Wieser on Fri Jan 14 01:24:43 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 2022-01-13, R.Wieser <address@not.available> wrote:

    Herbert,

    What is the setting for "Enfernungsrichtlinie" in properties of the disk?

    When the memory stick was FAT32 formatted stick it was the first. Later I formatted it to NTFS, which a choice you only get when the second is
    enabled.

    In short, I have had both for that stick. Didn't make much of a difference.

    I had a thumb drive formatted as NTFS for quite some time. The performance really sucked, but I stuck with it because FAT32 only stores time stamps to
    the nearest 2 seconds, making time stamp comparison (and makefiles) problematic.

    However, as you noted earlier, I also noticed that copying large numbers of small files to a flash drive runs like molasses through a pinhole in January, so I resorted to the solution that has already been mentioned here: zip the files on your hard drive and then copy the .zip file to the flash drive.
    Since zip files preserve time stamps to the second, there was no longer
    any reason not to revert the flash drive to FAT32 and get even better speed.

    Zipping directly to the flash drive causes many small work files to be
    created and scratched on it, again resulting in abominable performance.

    --
    /~\ Charlie Gibbs | Microsoft is a dictatorship.
    \ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
    X I'm really at ac.dekanfrus | Linux is anarchy.
    / \ if you read it the right way. | Pick your poison.

  • From Ant@21:1/5 to cgibbs@kltpzyxm.invalid on Thu Jan 13 20:52:35 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    In alt.windows7.general Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
    On 2022-01-13, R.Wieser <address@not.available> wrote:

    Herbert,

    What is the setting for "Enfernungsrichtlinie" in properties of the disk?

    When the memory stick was FAT32 formatted stick it was the first. Later I formatted it to NTFS, which a choice you only get when the second is enabled.

    In short, I have had both for that stick. Didn't make much of a difference.

    I had a thumb drive formatted as NTFS for quite some time. The performance really sucked, but I stuck with it because FAT32 only stores time stamps to the nearest 2 seconds, making time stamp comparison (and makefiles) problematic.

    What about exFAT? NTFS is a problem to use on devices that don't know it.
    --
    Slammy new week as expected. Lots of spams again! 2022 isn't any better and different so far. :(
    Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
    /\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
    / /\ /\ \ Please nuke ANT if replying by e-mail.
    | |o o| |
    \ _ /
    ( )

  • From R.Wieser@21:1/5 to All on Fri Jan 14 09:57:44 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Charlie,

    but I stuck with it because FAT32 only stores time stamps to
    the nearest 2 seconds, making time stamp comparison (and
    makefiles) problematic.

    I encountered a similar problem: even a just-copied file could be a number
    of seconds different from the original. As I'm not in the habit of running backups while working on my machine, I decided to allow for a 10-second difference in timestamps.

    That, and problems with summer/wintertime differences. :-\
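    Such a tolerance check can be as simple as this sketch (the helper name is made up; the one-hour case covers the summer/wintertime shift, and the 10-second window absorbs FAT32's 2-second timestamp granularity):

```python
def same_mtime(a: float, b: float, tolerance: float = 10.0) -> bool:
    """Treat two modification times (seconds since the epoch) as equal
    when they differ by less than `tolerance` seconds, or when they are
    exactly one hour apart give-or-take the same tolerance (a
    summer/wintertime artifact on FAT-family file systems)."""
    diff = abs(a - b)
    return diff < tolerance or abs(diff - 3600) < tolerance
```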

    However, as you noted earlier, I also noticed that copying large numbers
    of small files to a flash drive runs like molasses through a pinhole in January,

    Hmmm... Although, comparatively, a USB stick takes quite a bit longer
    than a USB HD, I do not really see any slow-down over the duration of the copying itself.

    so I resorted to the solution that has already been mentioned here: zip
    the files on your hard drive and then copy the .zip file to the flash
    drive.

    I also considered that, but it would have thrown quite a spanner in what
    my program was actually built for: checking for changed files and updating/deleting them on the copy.

    Also, I wanted to be sure that regardless of how much space I had left on
    the source I would always be able to make a backup or update it.

    Next to that, I also considered the possibility that a write or read error (bad sector) could trash a compressed file, with little-to-no chance of recovery.

    Zipping directly to the flash drive causes many small work files to be created and scratched on it, again resulting in abominable performance.

    Not only zipping to a USB stick; I also noticed much worse performance when extracting from a ZIP stored on a USB stick.

    But yes, the effective need for an intermediate, to-be-transferred ZIP file
    is what I meant with my "how much space I had left on the source" remark.

    Regards,
    Rudy Wieser

  • From Frank Slootweg@21:1/5 to Wieser on Fri Jan 14 15:16:03 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Wieser <address@not.available> wrote:
    Herbert,

    What is the setting for "Enfernungsrichtlinie" in properties of the disk?

    Google Translate broken!? :-) (Yes, GT can handle Herbert's small spelling error, "Entfernungsrichtlinie".)

    It means "removal policy".

    <https://translate.google.com/?sl=de&tl=en&text=Enfernungsrichtlinie&op=translate>

    But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus
    fast performance) removal policy, at least they don't on my Windows 8.1
    system.

    (Probably USB *disks* have a (fast removal versus fast performance)
    removal policy, but I haven't checked.)

    [...]

  • From Char Jackson@21:1/5 to All on Fri Jan 14 10:37:36 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Fri, 14 Jan 2022 09:57:44 +0100, "R.Wieser" <address@not.available>
    wrote:

    Charlie,

    but I stuck with it because FAT32 only stores time stamps to
    the nearest 2 seconds, making time stamp comparison (and
    makefiles) problematic.

    I encountered a similar problem : even a just-copied file could be a number of seconds different from the origional. As I'm not in the habit of running backups while working on my machine I decided to allow for a 10 second difference in timestamps.

    That, and problems with summer/wintertime differences. :-\

    However, as you noted earlier, I also noticed that copying large numbers
    of small files to a flash drive runs like molasses through a pinhole in
    January,

    Hmmm... Although comperativily it takes an USB stick quite a bit longer
    than a USB HD, I do not really see any slow-down over the duration of the copying itself.

    so I resorted to the solution that has already been mentioned here: zip
    the files on your hard drive and then copy the .zip file to the flash
    drive.

    I also considered that, but that would have thrown quite a spanner in what
    my program was actually build for : checking for changed files and updating/deleting them on the copy.

    Also, I wanted to be sure that regardless of how much space I had left on
    the source I would always be able to make a backup or update it.

    Next to that I also considered the possibility of a write or read error (bad sector) could trash a compressed file, with little-to-no chance of recovery.

    One of the reasons that I like to use containers, such as zip or rar, is
    that data corruption is much easier to spot. With bare files that aren't
    in a container, bit rot can go undetected for a long period of time and
    once found, it can be very difficult to correct.

    My main tool for detecting bit rot, and then *correcting* it, is
    Quickpar. http://www.quickpar.org.uk/
    I use it almost daily and highly recommend it. It works equally well on non-containerized files, but I mostly use it with multipart rar files.

    <snip>

  • From R.Wieser@21:1/5 to All on Fri Jan 14 17:58:36 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Frank,

    But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus
    fast performance) removal policy, at least they don't on my Windows 8.1 system.


    On my XPsp3 machine I can choose either. Though when I set it to "fast removal" I can't select NTFS when formatting (and vice versa). Which, long ago, confused the heck out of me. :-o

    Regards,
    Rudy Wieser

    P.s.
    The way you quoted it made it look as if /I/ posted that "what if" line ...

  • From R.Wieser@21:1/5 to All on Fri Jan 14 18:54:24 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Char,

    One of the reasons that I like to use containers, such as zip or rar,
    is that data corruption is much easier to spot.

    You mean, the first time you try to use /any/ file out of it, it barfs and refuses to let you access most, if not all, of its contained files? :-|

    With bare files that aren't in a container, bit rot can go undetected
    for a long period ...

    True. But then I only lose the affected file(s), not the whole container (or
    many/most of its files).

    ... and once found, it can be very difficult to correct.

    Harder than fixing a bit-rotted container? :-p

    But granted, the sooner you become aware of a problem with a backup the
    better.

    My main tool for detecting bit rot, and then *correcting* it,
    is Quickpar.

    :-) You make it sound as if you can just throw a file at it and it will
    detect and correct bitrot in it. Which is of course not quite the way it
    works.

    Though granted, pulling the files through such a program (which adds detection/recovery information to the file) before saving them to a backup
    does enable the files to survive a decent amount of bitrot or other similarly small damage.
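    For illustration, the principle behind such recovery information can be shown with a toy XOR parity scheme (real PAR2 uses Reed-Solomon coding and can repair many blocks; this sketch recovers exactly one lost block whose position is known):

```python
from functools import reduce

def make_parity(blocks: list[bytes]) -> bytes:
    """XOR all equal-sized data blocks column-wise into one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover(blocks_with_gap: list, parity: bytes) -> bytes:
    """Rebuild the single missing block (marked None) by XOR-ing the
    parity block with every surviving block: the lost data cancels out."""
    survivors = [b for b in blocks_with_gap if b is not None]
    return make_parity(survivors + [parity])
```

    The extra parity data costs one block of storage but lets any single damaged block be rebuilt, which is the trade-off behind the "add recovery information before backing up" approach.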

    Regards,
    Rudy Wieser

  • From Herbert Kleebauer@21:1/5 to Frank Slootweg on Fri Jan 14 19:09:02 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 14.01.2022 16:16, Frank Slootweg wrote:

    But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus fast performance) removal policy, at least they don't on my Windows 8.1 system.


    in explorer right-click on the external usb drive
    select "Properties"
    on the Hardware tab, select your device and select "Properties"
    select "Change Settings" on the General tab.
    select the "Policies" tab

  • From Frank Slootweg@21:1/5 to R.Wieser on Fri Jan 14 19:26:56 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    R.Wieser <address@not.available> wrote:
    Frank,

    [...]

    P.s.
    The way you quoted it made it look as if /I/ posted that "what if" line ...

    Sorry, a case of sloppy reading! I thought you didn't understand
    Herbert's German text.

    I will now crawl back under my rock. :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Slootweg@21:1/5 to Herbert Kleebauer on Fri Jan 14 19:26:56 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Herbert Kleebauer <klee@unibwm.de> wrote:
    On 14.01.2022 16:16, Frank Slootweg wrote:

    But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus fast performance) removal policy, at least they don't on my Windows 8.1 system.


    in explorer right-click on the external usb drive
    select "Properties"
    on the Hardware tab, select your device and select "Properties"
    select "Change Settings" on the General tab.
    select the "Policies" tab

    Thanks!

    I could remember that functionality, but could not find it again.

    In hindsight, I apparently overlooked several things, both on the
    'Hardware' tab and on the second level 'Properties' popup/window.

    So thanks again.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Char Jackson@21:1/5 to All on Fri Jan 14 13:53:54 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Fri, 14 Jan 2022 18:54:24 +0100, "R.Wieser" <address@not.available>
    wrote:

    Char,

    One of the reasons that I like to use containers, such as zip or rar,
    is that data corruption is much easier to spot.

    You mean, the first time you try to use /any/ file out of it, it barfs and refuses to let you access most, if not all, of its contained files ? :-|

    I suppose so, but I'd say that's preferable to the alternative, which is
    not knowing. With individual files, I suppose each file would have to be
    loaded in some way, checking for obvious corruption or error messages,
    or in the case of system files like .dll and .exe just give it your best
    shot, but all of the major container types give you an easy method to
    detect corruption of the container itself. That's a huge benefit.

    With bare files that aren't in a container, bit rot can go undetected
    for a long period ...

    True. But then I only lose the affected file(s), not the whole (or
    many/most of) the container.

    You don't actually have to lose any data. Quickpar can usually recover
    that stuff, if it's used correctly.

    ... and once found, it can be very difficult to correct.

    Harder than fixing a bit-rotted container ? :-p

    Good point. Much harder to spot damage in individual files, but once
    spotted, probably equally easy to correct if you've planned ahead.

    But granted, the sooner you become aware of a problem with a backup the better.

    My main tool for detecting bit rot, and then *correcting* it,
    is Quickpar.

    :-) You make it sound as if you can just throw a file at it and it will detect and correct bitrot in it. Which is of course not quite the way it works.

    Right, you have to plan ahead. You have to ask yourself, are these files something that I care about? If so, how much damage do I want to be
    prepared to correct? The answer to the second question, using Quickpar,
    can be anywhere from 0% to 100%, but it's a decision that has to be made
    before any damage occurs.

    Though granted, pulling the files through such a program (which adds detecting/recovery information to the file) before saving them to a backup does enable the files to survive a decent amount of bitrot or other similarly small damage.

    Quickpar creates additional files (parity files) that you can use to
    detect and correct damage. The actual files that you care about are not
    touched in any way unless and until it becomes necessary to repair them.
    You also don't have to store the parity files alongside the files that
    they're protecting. You can, of course, but you can also store them, or
    a second copy of them since they're relatively small, in another
    location.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Fri Jan 14 23:00:46 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Char,

    I suppose so, but I'd say that's preferable to the alternative, which
    is not knowing

    Hmmm... not knowing versus losing everything. I think I would go for not knowing.

    I suppose each file would have to be loaded in some way
    ...
    but all of the major container types give you an easy method to
    detect corruption of the container itself

    Which happens by reading the whole container and then doing some checking magic. A difference between lots of small parts versus one big blob.

    Though the multiple small parts would need a separate file to store all the checksums. Either that, or add them as an alternate data stream to the
    files themselves (which of course would need the storage to be NTFS or comparable in this regard)
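    That separate checksum file is only a few lines of script to build and re-check (a sketch; the manifest layout and paths here are illustrative, not any particular tool's format):

```python
# Sketch: per-file checksums kept in a separate manifest, so bare files
# can be verified later without a container. Detects bit rot; does not
# repair it (that's what parity/recovery data is for).
import hashlib
import os

def file_sha256(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """Map relative path -> digest for every file under root."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = file_sha256(full)
    return manifest

def verify(root, manifest):
    """Return the files whose current digest no longer matches."""
    return [p for p, digest in manifest.items()
            if file_sha256(os.path.join(root, p)) != digest]
```

    Run `build_manifest` when the backup is written and `verify` on each later check; any path it returns is a file to re-copy from another backup.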

    Right, you have to plan ahead. You have to ask yourself, are these
    files something that I care about?

    Well, that is why I'm making backups of them, aren't I ? :-)

    If so, how much damage do I want to be prepared to correct?

    I'm not prepared to repair anything.

    On the other hand, I'm making multiple backups of stuff that's important to
    me. If one even /hints/ at failing I make a new copy from it (or from one
    of the others) - and compare it against one of the others to verify its integrity.

    Quickpar creates additional files

    Ah yes, of course. For some reason I was thinking of the original data
    being wrapped in the parity/checksum/recovery data. Or that the data was stored in an aforementioned alternate data stream.

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Fri Jan 14 22:14:42 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Frank,

    Sorry, a case of sloppy reading! I thought you didn't understand
    Herbert's German text.

    I'm actually from his west-side next-door neighbour, the Netherlands.

    A lot of "nachsynchronisierter schaukasten films" (hope I wrote that right)
    in my youth and a year of it at school enabled me to read most of it.

    I will now crawl back under my rock. :-)

    Ah, you do not need to go /that/ far... :-p

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Char Jackson@21:1/5 to All on Fri Jan 14 19:31:49 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Fri, 14 Jan 2022 23:00:46 +0100, "R.Wieser" <address@not.available>
    wrote:

    Char,

    I suppose so, but I'd say that's preferable to the alternative, which
    is not knowing

    Hmmm... not knowing versus losing everything. I think I would go for not knowing.

    That's a false choice, obviously. :-)

    The choice is knowing or not knowing. It's much, much easier to check a container (and by doing so, check all of its contents in one fell
    swoop), than to check individual files.

    It sounds like you have a solution that works for you, though, so it's
    all good.

    <snip>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sat Jan 15 08:44:02 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Char,

    That's a false choice, obviously. :-)

    The choice is knowing or not knowing.

    I don't think so. But let's not pursue this further, shall we ?

    It's much, much easier to check a container (and by doing so,
    check all of its contents in one fell swoop), than to check
    individual files.

    :-) It's not you doing the checking, but a program. And just as you can let your containerizing program do the checking for you, so can I start my
    program to trawl through all those files of mine and do the same.

    It sounds like you have a solution that works for you, though, so
    it's all good.

    I was thinking the same.

    Thanks for bringing your methods up though. It gave me other possibilities
    to consider. Which is always good.

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Steve Hayes@21:1/5 to All on Sat Jan 15 12:07:49 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On Wed, 12 Jan 2022 12:28:35 +0100, "R.Wieser" <address@not.available>
    wrote:

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried another, different USB stick and see the same thing happen.

    I use a batch file to do something similar, in order to copy data
    files from my desktop computer (XP) to my laptop (Win7), and back
    again.

    Some of the files are compressed (ARJ or ZIP) and some are not.

    Sometimes, especially on the XP machine, the USB connection drops in
    the middle of a transfer, and then I need to run CHKDSK on the flash
    drive, otherwise it slows down or might lose data.

    Perhaps you could try running CHKDSK /f

    --
    Steve Hayes from Tshwane, South Africa
    Web: http://www.khanya.org.za/stevesig.htm
    Blog: http://khanya.wordpress.com
    E-mail - see web page, or parse: shayes at dunelm full stop org full stop uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sat Jan 15 14:03:38 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Steve,

    Sometimes, especially on the XP machine, the USB connection drops in
    the middle of a transfer, and then I need to run CHKDSK on the flash
    drive, otherwise it slows down or might lose data.

    Thanks for the suggestion. Someone else mentioned that his stick heats up while writing and causes problems like I described. Maybe yours has the
    same problem ?

    After I posted my above question and didn't get a "That's a well-known
    problem" response I've done a number of tests. As it turns out the problem
    is that specific USB stick (see the further posts I made)

    As such I decided to effectively bin it (I can still muck around with it,
    but it won't ever be used for storage again)

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Slootweg@21:1/5 to R.Wieser on Sat Jan 15 13:07:24 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    R.Wieser <address@not.available> wrote:
    Frank,

    Sorry, a case of sloppy reading! I thought you didn't understand
    Herbert's German text.

    I'm actually from his west-side next-door neighbour, the Netherlands.

    Yes, I know that, like me, you're a Dutchie. In the beginning your
    lastname had me thinking you were German, but that was a long time ago.

    A lot of "nachsynchronisierter schaukasten films" (hope I wrote that right) in my youth and a year getting at school enabled me to read most of it.

    I hate(d) those films. Earth to Germany: John Wayne isn't (wasn't) a
    German. And - in news programs - neither is Joe Biden. There's this
    thing called 'subtitles', you should try it some time! :-(

    I will now crawl back under my rock. :-)

    Ah, you do not need to go /that/ far... :-p

    Well, it's actually quite cosy! :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sat Jan 15 18:11:55 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Frank,

    Yes, I know that, like me, you're a Dutchie. In the beginning your
    lastname had me thinking you were German, but that was a long time ago.

    Well, I've been told that my great-granddad (or one above that) came from Germany to try his luck here, and just never left. IOW, I do seem to have
    some German blood in me.

    I hate(d) those films. Earth to Germany: John Wayne isn't (wasn't) a
    German. And - in news programs - neither is Joe Biden.

    Nowadays I have the same dislike towards televised movies. Not much of a problem with having some dubbed stuff coming through the news though.

    There's this thing called 'subtitles', you should try it some time! :-(

    Gee, what an innovative idea. :-p

    But yes, I'll take that over audio dubbing any time. Then again, I've always found subtitles handy to support my understanding of what gets spoken.

    I will now crawl back under my rock. :-)

    Ah, you do not need to go /that/ far... :-p

    Well, it's actually quite cosy! :-)

    Well yeah, but it hurts my back when I have to bow down that deep to look at you when I want to talk to you. :-)

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian Gregory@21:1/5 to R.Wieser on Sun Jan 16 02:23:05 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 12/01/2022 13:52, R.Wieser wrote:
    Paul,

    Thanks for the suggestions.

    The reason why I copy files is that I can then access them using any kind of filebrowser - even a simple one which doesn't understand the concept of ZIP folders.

    As for the cluster size, the second USB stick was new, and as such I assume it was pre-formatted with the optimum block size. Also, it was NTFS (I had hoped it would make a difference, but it looks like it doesn't)

    And there is a problem with that suggestion : Years ago, when I thought of the same (micro-SD connected to a DOS 'puter), I was unable to get that optimum from the USB stick itself. No matter which drive interrogation function I looked at, none of them returned the size of the flash chip's memory blocks. :-(

    As a rule of thumb always format USB flash drives with 32K allocation
    blocks. A possible alternative is to remember what size blocks were used
    when it was brand new. By default Windoze tends to format USB flash with
    blocks that are much too small.
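    For what it's worth, Microsoft documents its formatting defaults in the "Default cluster size for NTFS, FAT, and exFAT" support article. The FAT32 tiers relevant to typical sticks can be sketched as a lookup (tiers for volumes under 256 MB are omitted here):

```python
# Windows' documented FAT32 default cluster sizes for common stick sizes,
# illustrating the point above: an 8 GB stick gets 4 KB clusters by
# default, well under the 32 KB rule of thumb.
GB = 1024**3
FAT32_DEFAULTS = [          # (upper volume bound, default cluster bytes)
    (8 * GB, 4096),
    (16 * GB, 8192),
    (32 * GB, 16384),
]

def default_cluster(volume_bytes):
    for bound, cluster in FAT32_DEFAULTS:
        if volume_bytes <= bound:
            return cluster
    raise ValueError("Windows' own format refuses FAT32 over 32 GB")

print(default_cluster(8 * GB))   # 4096
```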

    --
    Brian Gregory (in England).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sun Jan 16 09:20:43 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Brian,

    As a rule of thumb always format USB flash drives with 32K allocation
    blocks.

    Thanks. I've heard that number before, but never thought of it as a
    universal one (true for sticks of any make and size). I've always assumed that the flash-chip's block size could/would change with the chip's size.

    A possible alternative is to remember what size blocks were used when it
    was brand new.

    You mean that I have to store those values for all my different (and
    changing) sticks in a document somewhere (or stick them on the stick
    itself) ? Yeah, I thought about that too. But as I cannot even seem to
    be able to uniquely mark an individual stick (so I can look it up in a list) ...

    By default Windoze tends to format USB flash with blocks that are much too small.

    And there I was, assuming that MS would know more about this than I could
    ever hope to, and therefore use some sensible default. :-(

    Thanks for the info.

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mayayana@21:1/5 to R.Wieser on Sun Jan 16 08:39:08 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    "R.Wieser" <address@not.available> wrote

    | > As a rule of thumb always format USB flash drives with 32K allocation
    | > blocks.
    |
    | Thanks. I've heard that number before, but never thought of it as an
    | universal one (true for sticks of any make and size). I've always
    assumed
    | that the flash-chips block size could/would change with the chips size.
    |

    It's not universal. Block size only relates to the limits
    of 4-byte integers. I don't know whether the counting is
    an unsigned or signed number, but the blocks have to
    be big enough that the number of them doesn't end up
    more than the limit of a long integer on a 32-bit system.
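    The counter limit in question is easy to put numbers on. The sketch below assumes FAT32, whose 32-bit FAT entries have 28 usable bits, so the cluster count must stay under roughly 268 million; the 2 TB volume is just an example:

```python
# Back-of-envelope check: for a given volume size, the cluster size must
# be large enough that the cluster COUNT fits the on-disk counter.
MAX_FAT32_CLUSTERS = 2**28 - 16          # approximate usable ceiling

def clusters_needed(volume_bytes, cluster_bytes):
    return -(-volume_bytes // cluster_bytes)   # ceiling division

two_tb = 2 * 1024**4
print(clusters_needed(two_tb, 512) <= MAX_FAT32_CLUSTERS)        # too many
print(clusters_needed(two_tb, 32 * 1024) <= MAX_FAT32_CLUSTERS)  # fits
```

    So a 2 TB FAT32 volume is impossible with 512-byte clusters (2^32 of them) but fine with 32 KB clusters (2^26 of them).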

    | > A possible alternative is to remember what size blocks were used when it
    | > was brand new.
    |
    | You mean that I have to store those values for all my different (and
    | changing) sticks in a document somewhere (or a sticking it on the stick
    | itself) ? Yeah, I thought about that too. But as I cannot even seem to
    | be able to uniquely mark an individual stick (so I can look it up in a
    list)
    | ...
    |
    | > By default Windoze tends to format USB flash with blocks that are much
    too
    | > small.
    |
    | And there I was, assuming that MS would know more about this that I could
    | ever hope to do, and therefore use some sensible default. :-(
    |
    | Thanks for the info.
    |
    | Regards,
    | Rudy Wieser
    |
    |

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sun Jan 16 15:25:46 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Mayayana,

    It's not universal. Block size only relates to the limits
    of 4-byte integers. I don't know whether the counting is
    an unsigned or signed number, but the blocks have to
    be big enough that the number of them doesn't end up
    more than the limit of a long integer on a 32-bit system.

    Are you sure ? I mean, why would a flash chip do anything with individual byte addressing over its whole range ?

    As far as I know the reading/writing of the flash chip is broken up in two parts : one command with an address to indicate which block needs to be read/written, and another command with its own address to read bytes
    from/write bytes to the flash-chips internal buffer.

    As long as that buffer is smaller than what the OS can handle everything is hunky-dory. Which might well be why the blocksize is limited to "just" 32 KByte.

    Regards,
    Rudy Wieser

    P.s.
    I just checked, and the largest "allocation size" I can choose is a mere
    4096 bytes ....

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Mayayana on Sun Jan 16 09:29:09 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 1/16/2022 8:39 AM, Mayayana wrote:
    "R.Wieser" <address@not.available> wrote

    | > As a rule of thumb always format USB flash drives with 32K allocation
    | > blocks.
    |
    | Thanks. I've heard that number before, but never thought of it as an
    | universal one (true for sticks of any make and size). I've always assumed
    | that the flash-chips block size could/would change with the chips size.
    |

    It's not universal. Block size only relates to the limits
    of 4-byte integers. I don't know whether the counting is
    an unsigned or signed number, but the blocks have to
    be big enough that the number of them doesn't end up
    more than the limit of a long integer on a 32-bit system.

    I hoped to have two examples (using dead USB sticks),
    but one of them, the datasheet is untraceable.

    **********************************************
    Lexar S73 32GB (Not exactly fast)

    29f128g08cfaaa 16GBx2

    Open NAND Flash Interface (ONFI) 2.2-compliant
    Multiple-level cell (MLC) technology
    Organization
    Page size x8: 8640 bytes (8192 + 448 bytes) <=== the 448 bytes could be ECC
    Block size: 256 pages (2048K + 112K bytes)
    Plane size: 2 planes x 2048 blocks per plane
    Device size: 64Gb: 4096 blocks;
    128Gb: 8192 blocks; <===
    256Gb: 16,384 blocks;
    512Gb: 32,768 blocks
    Endurance: 5000 PROGRAM/ERASE cycles

    Array performance
    Read page: 50μs (MAX)
    Program page: 1300μs (TYP)
    Erase block: 3ms (TYP)

    Synchronous I/O performance
    Up to synchronous timing mode 5
    Clock rate: 10ns (DDR)
    Read/write throughput per pin: 200 MT/s
    **********************************************

    I would think a cluster size of 8K or larger
    would be a start, to reduce write amplification.

    Note that Windows 7 preparation of the stick, should
    align the beginning of the partition to 1MB (0x100000)
    boundaries. And by aligning the partition to binary-power-of-two,
    that avoids the OS trying to write "the end of one page and
    a bit onto the second page".
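    The alignment check itself is simple arithmetic (the sector size and start LBAs below are the usual textbook values, not read from any particular stick):

```python
# Is a partition's starting sector aligned to a 1 MiB boundary?
SECTOR = 512
ALIGN = 1024 * 1024   # 1 MiB

def is_aligned(start_lba, sector=SECTOR, align=ALIGN):
    return (start_lba * sector) % align == 0

print(is_aligned(63))     # classic XP start at sector 63: False
print(is_aligned(2048))   # Vista/7 default: True (2048 * 512 = 1 MiB)
```

    Which is exactly why a stick partitioned under XP (sector 63) straddles flash pages, while a Windows 7-prepared one (sector 2048) does not.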

    Don't format the sticks on Windows XP, for fear of
    screwing up the alignment.

    Then do the experiment again.

    The second chip I wanted to research was this,
    but I got zero reasonable leads for this search.

    ***************************************
    This is a 128GB miniature USB flash stick, with only
    a single Ball Grid Array flash chip on it. Can't find a
    datasheet, to get a page size. The marking on the chip
    of "NW605" is not a part number, but seems to reference
    the MT29F part number below.

    Lexar S23-128 plastic barrel, 10MB/sec writes

    3XA22 NW605 (BGA)

    MT29F1T08CUEABH8-12:A
    NAND Flash 1TBIT 83MHZ 152LBGA

    This is one of those "really shouldn't be a USB3 product" designs.
    The plastic barrel for the USB stick was a disaster, allowing
    one of the USB3 high speed pins to snap off during insertion,
    and dooming the device to reading at USB2 rates.
    ***************************************

    I have other USB sticks that are still working, but on those
    the trick to disassembling them without damaging them is not obvious
    to me.

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Frank Slootweg@21:1/5 to R.Wieser on Sun Jan 16 16:07:26 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    R.Wieser <address@not.available> wrote:
    P.s.
    I just checked, and the largest "allocation size" I can choose is a mere
    4096 bytes ....

    Maybe that's an XP limitation.

    FWIW, on my Windows 8.1 system for a 4GB SD-card (i.e. not an USB
    memory stick) the choices range from 1024 bytes to 32 kilobytes for
    FAT32 and from 512 bytes to 64 kilobytes for NTFS.

    What was your stick-size and type of filesystem again? (I lost track
    in this long thread.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Sun Jan 16 17:48:19 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Frank,

    What was your stick-size and type of filesystem again? (I lost
    track in this long thread.)

    It's an 8 GByte stick, currently NTFS formatted.

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Charlie Gibbs on Sun Jan 16 14:13:12 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 1/16/2022 1:42 PM, Charlie Gibbs wrote:
    On 2022-01-16, R.Wieser <address@not.available> wrote:

    By default Windoze tends to format USB flash with blocks that are much too >>> small.

    And there I was, assuming that MS would know more about this that I could
    ever hope to do, and therefore use some sensible default. :-(

    Sensible default? Microsoft? Thanks - I needed a laugh this morning.


    For formatting large FAT32 volumes (up to 2.2TB), the Ridgecrop
    fat32formatter can do that. This solves the ~32GB limit Microsoft
    has on fat32 partitions. I think there might be some cluster
    size options as well.

    https://web.archive.org/web/20200424145132/http://www.ridgecrop.demon.co.uk/index.htm?fat32format.htm

    https://web.archive.org/web/20200410224838if_/http://www.ridgecrop.demon.co.uk/download/fat32format.zip

    *******

    Windows 10 has some extra-large cluster size options for
    NTFS, but those are not backward compatible with NTFS
    on earlier OSes, and I can't really recommend using
    a 1MB cluster on Win10 and then not being able to
    read it on Windows 7. Clusters up to 64KB might be
    a bit more compatible across the OSes.

    *******

    4KB clusters are used on C: , so that Compression and
    EFS encryption will work. Windows 10 C: used to support
    having 64KB clusters on it. But after the third or
    fourth upgrade, it stopped "tolerating" that choice.
    I only discovered this by accident, by taking an older
    partition and installing Windows 10 in it... without
    reformatting first. Oops :-)

    Since you are not normally installing Windows 10 C: on
    a USB flash drive, this 4KB choice is not an issue.
    The Windows To Go thing, might have been the reason
    there was some room for non-standard clusters at first.
    Once WTG was removed as an option, that would make it
    easier for them to insist on 4KB clusters.

    https://docs.microsoft.com/en-us/windows/deployment/planning/windows-to-go-overview

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to R.Wieser on Sun Jan 16 18:42:44 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 2022-01-16, R.Wieser <address@not.available> wrote:

    By default Windoze tends to format USB flash with blocks that are much too >> small.

    And there I was, assuming that MS would know more about this that I could ever hope to do, and therefore use some sensible default. :-(

    Sensible default? Microsoft? Thanks - I needed a laugh this morning.

    --
    /~\ Charlie Gibbs | Microsoft is a dictatorship.
    \ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
    X I'm really at ac.dekanfrus | Linux is anarchy.
    / \ if you read it the right way. | Pick your poison.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian Gregory@21:1/5 to R.Wieser on Mon Jan 17 11:50:29 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 16/01/2022 16:48, R.Wieser wrote:
    Frank,

    What was your stick-size and type of filesystem again? (I lost
    track in this long thread.)

    Its an 8 GByte stick currently NTFS formatted.

    Regards,
    Rudy Wieser



    If you want it to work fast don't use NTFS. Use FAT or exFAT.

    --
    Brian Gregory (in England).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian Gregory@21:1/5 to R.Wieser on Mon Jan 17 11:56:47 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 16/01/2022 08:20, R.Wieser wrote:
    Brian,

    As a rule of thumb always format USB flash drives with 32K allocation
    blocks.

    Thanks. I've heard that number before, but never thought of it as an universal one (true for sticks of any make and size). I've always assumed that the flash-chips block size could/would change with the chips size.

    It probably does change. But we're not needing an exact match to get a
    speed benefit.

    Having a big block on the drive divided into many small allocation
    blocks seems to be what can slow things down.

    --
    Brian Gregory (in England).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Mon Jan 17 17:17:16 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Brian,

    If you want it to work fast don't use NTFS. Use FAT or exFAT.

    I only reformatted to NTFS to check if the problem was perhaps file-system related. It doesn't seem to be.

    It probably does change. But we're not needing an exact match to get a
    speed benefit.

    Speed is one thing. Not needing to read-modify-write the same flash block
    is another. And that will happen if the allocation size is not the same
    as, or a multiple of, the flash-block size.

    Having a big block on the drive divided into many small allocation blocks seems to be what can slow things down.

    :-) You or the stick needing to do multiple read-modify-write actions will /definitely/ slow the stick down more than a simple write action - even if
    you just divide the flash block in (an uneven) two.

    IOW, I do understand the mechanics behind it. Which is why I'm (still) a
    bit surprised you cannot ask the stick for its optimum allocation size ...
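    The cost of that mismatch is easy to put rough numbers on (the erase-block and cluster sizes below are illustrative, not from any specific stick):

```python
# Worst-case write amplification: every cluster-sized write that lands in
# an already-programmed erase block forces the controller to read, erase,
# and rewrite the whole block.
ERASE_BLOCK = 2 * 1024 * 1024   # assume a 2 MiB flash erase block

def rewrites_per_erase_block(cluster_bytes):
    """How many cluster writes (= potential full-block rewrites) fill one block."""
    return ERASE_BLOCK // cluster_bytes

print(rewrites_per_erase_block(4096))       # 512 with 4 KiB clusters
print(rewrites_per_erase_block(32 * 1024))  # 64 with 32 KiB clusters
```

    Larger clusters don't remove the read-modify-write penalty, but they cut the number of times it can be paid per erase block, which is the whole basis of the 32K rule of thumb.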

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike S@21:1/5 to R.Wieser on Mon Jan 17 23:26:18 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 1/15/2022 5:03 AM, R.Wieser wrote:
    Steve,

    Sometimes, especially on the XP machine, the USB connection drops in
    the middle of a transfer, and then I need to run CHKDSK on the flash
    drive, otherwise it slows down or might lose data.

    Thanks for the suggestion. Someone else mentioned that his stick heats up while writing and causes problems like I described. Maybe yours has the same problem ?

    After I posted my above question and not getting a "Thats a well-known problem" response I've done a number of tests. As it turns out the problem is that specific USB stick (see the further posts I made)

    As such I decided to effectivily bin it (I can still muck around with it,
    but it won't ever be used for storage again)

    Regards,
    Rudy Wieser

    Might be interesting to open it up and add a heat sink and see what
    happens.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From R.Wieser@21:1/5 to All on Tue Jan 18 12:33:28 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    Mike,

    Might be interesting to open it up and add a heat sink and see what
    happens.

    I thought of that too. But after I stopped the copying, ejected it, left it alone for an hour (letting it cool down), restarted the copy and then saw it was directly going back into its one-file-each-six-seconds rhythm I didn't
    think that that would give me any more information.

    Regards,
    Rudy Wieser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian Gregory@21:1/5 to R.Wieser on Tue Jan 18 15:12:14 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 17/01/2022 16:17, R.Wieser wrote:
    Brian,

    If you want it to work fast don't use NTFS. Use FAT or exFAT.

    I only reformatted to NTFS to check if the problem was perhaps files-system related. It doesn't seem to be.

    It probably does change. But we're not needing an exact match to get a
    speed benefit.

    Speed is one thing. Not needing reading-modify-write the same flash-block
    is another. And that will happen if the allocation size is not the same
    as, or a multiple of, the flash-block size.

    Having a big block on the drive divided into many small allocation blocks
    seems to be what can slow things down.

    :-) you or the stick needing to do multiple read-modify-write actions will /definitily/ slow the stick down more than a simple write action - even if you just divide the flash block in (an uneven) two.

    IOW, I do understand the mechanics behind it. Which is why I'm (still) a
    bit surprised you cannot ask the stick for its optimum allocation size ...


    I think the flash used often has much larger pages than would make sense
    as an allocation block size.

    --
    Brian Gregory (in England).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Brian Gregory@21:1/5 to R.Wieser on Tue Jan 18 15:16:36 2022
    XPost: microsoft.public.windowsxp.general, alt.windows7.general

    On 12/01/2022 11:28, R.Wieser wrote:
    Hello all,

    I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to a 8GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried a another, different USB stick and see the same thing happen.

If you can, try with XCOPY.
    XCOPY is good at optimizing the writes to the directory.
    I think it opens and creates all the files first, then copies the
    contents one by one, and finally closes all the files.
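That three-phase strategy (create everything, then write contents, then close everything) can be sketched in Python. This is only an illustration of the idea Brian describes, not XCOPY's actual implementation:

```python
import os
import shutil
import tempfile

def batched_copy(src_paths, dst_dir):
    """Copy files in three phases: create all destination files first
    (batching the directory updates together), then write the contents,
    then close everything. Illustrative only - not what XCOPY really does."""
    handles = []
    for src in src_paths:  # phase 1: create every destination file
        dst = os.path.join(dst_dir, os.path.basename(src))
        handles.append((src, open(dst, "wb")))
    for src, out in handles:  # phase 2: copy the contents one by one
        with open(src, "rb") as f:
            shutil.copyfileobj(f, out)
    for _, out in handles:  # phase 3: close all destination files
        out.close()

# Tiny demo: copy three small files between two temp directories.
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
paths = []
for i in range(3):
    p = os.path.join(src_dir, "file%d.txt" % i)
    with open(p, "wb") as f:
        f.write(b"contents %d" % i)
    paths.append(p)
batched_copy(paths, dst_dir)
copied = sorted(os.listdir(dst_dir))
print(copied)
```

The point of the batching is that all the directory-entry writes land close together instead of being interleaved with the data writes.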

    --
    Brian Gregory (in England).

  • From R.Wieser@21:1/5 to All on Tue Jan 18 17:42:08 2022

    Brian,

    I think the flash used often has much larger pages than would make sense
    as an allocation block size.

There is nothing much I can say to that, as it's as unspecific as it comes ...

    Regards,
    Rudy Wieser

  • From Paul@21:1/5 to Brian Gregory on Tue Jan 18 13:04:09 2022

    On 1/18/2022 10:16 AM, Brian Gregory wrote:
    On 12/01/2022 11:28, R.Wieser wrote:
    Hello all,

I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to an 8 GByte, FAT32 formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50.000 files in over eight hours). I've also tried another, different USB stick and see the same thing happen.

    If you can try with XCOPY.
    XCOPY is good at optimizing the writes to the directory.
    I think it opens and creates all the files first, then copies the contents one by one, and finally closes all the files.


    If the stick is wiped, and a partition created via Windows 7,
    that ensures proper cluster alignment for flash memory storage.
    Doesn't matter what file system you use, if the partitions
    start on 1MB boundaries, that should help a lot with aligning
    the clusters (which are power-of-two), with the flash hardware
    addressing (which is power-of-two favoring).

    If you cleaned the stick and prepared it on WinXP, then bits of
    the geometry are divisible by 63, and that is not a power of two
    number, and that doubles the amount of work for writes (assuming
    lots of small files, not just one big file).
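The arithmetic is easy to check: a WinXP-style partition starts at sector 63, byte offset 63 × 512 = 32,256, which no power-of-two block size above 512 divides, while the Windows-7-style start at sector 2048 is exactly 1 MiB. A quick sketch (the sector numbers are the usual defaults for each OS):

```python
SECTOR = 512  # bytes per logical sector, the traditional value

def is_aligned(start_sector, block_bytes):
    """True if a partition starting at this sector begins on a
    block_bytes boundary (block_bytes a power of two)."""
    return (start_sector * SECTOR) % block_bytes == 0

# WinXP-style start (sector 63) vs Windows-7-style start (sector 2048).
xp_4k = is_aligned(63, 4096)       # False: offset 32256 is not 4K-aligned
w7_4k = is_aligned(2048, 4096)     # True: 1 MiB is 4K-aligned
w7_1m = is_aligned(2048, 1 << 20)  # True: exactly on a 1 MiB boundary
print(xp_4k, w7_4k, w7_1m)
```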

I've made this kind of error before with hard drives - prepped
them with WinXP, then noticed that "something is wrong" on Windows 7.
And eventually, by accident, I would discover my alignment error:
preparing on Windows XP was the mistake. This is an issue with
512e drives (512 virtual, 4K physical) and not an issue with
512n drives (512 virtual, 512 physical).

    I have one drive on this machine right now, WD Gold 4TB, and it is
    one of the largest 512n drives you can get. And that means it
    doesn't have alignment issues internally (no edge cases, no
    read-modify-write). It was purchased in case I needed the drive for
    Windows XP, but my Windows XP machine is dead, so now I can use
    the 512n drive(s) I have for other things.

    Paul

  • From J. P. Gilliver (John)@21:1/5 to Paul on Tue Jan 18 18:31:00 2022

    On Tue, 18 Jan 2022 at 13:04:09, Paul <nospam@needed.invalid> wrote (my responses usually follow points raised):
    []
    If you cleaned the stick and prepared it on WinXP, then bits of
    the geometry are divisible by 63, and that is not a power of two
    number, and that doubles the amount of work for writes (assuming
    lots of small files, not just one big file).
    []
    How did the 63 size come about? It seems a very odd choice in computing.
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

    Mike Jackson |\ _,,,---,,_
    and Squeak /,`.-'`' -. ;-;;,_ Shame there's no snooze button
    [1998] |,4- ) )-,_..;\ ( `'- on a cat who wants breakfast
    zzz '---''(_/--' `-'\_)

  • From Paul@21:1/5 to All on Tue Jan 18 13:42:29 2022

    On 1/18/2022 1:31 PM, J. P. Gilliver (John) wrote:
    On Tue, 18 Jan 2022 at 13:04:09, Paul <nospam@needed.invalid> wrote (my responses usually follow points raised):
    []
    If you cleaned the stick and prepared it on WinXP, then bits of
    the geometry are divisible by 63, and that is not a power of two
    number, and that doubles the amount of work for writes (assuming
    lots of small files, not just one big file).
    []
    How did the 63 size come about? It seems a very odd choice in computing.

    It was a choice in CHS.

    https://en.wikipedia.org/wiki/Cylinder-head-sector

    63 sectors/track

Windows 7 uses unconventional-looking values in the partition table.
You can see these with ptedit32.exe, a utility you should have
acquired back when it was easy to get. The FTP server it was on (for free)
closed some time ago. And since archive.org does not archive FTP sites,
the info is easily lost when files are hosted that way.
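For reference, the classic CHS-to-LBA conversion shows where the 63 bites: with 63 sectors per track (numbered from 1), the traditional first-partition start of cylinder 0, head 1, sector 1 lands on LBA 63. A sketch using the usual 255-head translated geometry:

```python
HEADS = 255             # heads per cylinder in the usual translated geometry
SECTORS_PER_TRACK = 63  # sectors per track; sectors are numbered from 1

def chs_to_lba(c, h, s):
    """Classic CHS -> LBA conversion."""
    return (c * HEADS + h) * SECTORS_PER_TRACK + (s - 1)

# The traditional first-partition start, cylinder 0 / head 1 / sector 1,
# lands on LBA 63 - byte offset 63 * 512 = 32256, not a power of two.
first_lba = chs_to_lba(0, 1, 1)
print(first_lba, first_lba * 512)
```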

    Paul

  • From Paul@21:1/5 to R.Wieser on Tue Jan 18 13:51:15 2022

    On 1/18/2022 11:42 AM, R.Wieser wrote:
    Brian,

    I think the flash used often has much larger pages than would make sense
    as an allocation block size.

There is nothing much I can say to that, as it's as unspecific as it comes ...

    Regards,
    Rudy Wieser

    8KB is a good value for a 32GB flash stick.

It just means using a custom value, rather than accepting
the default value.

    You can take a look through the Micron catalog for
    values on some TLC, but then, you won't know which
    of the chips is used for USB flash sticks. Presumably
    some of the flash is less well tested and suited
    to making USB flash sticks. As Micky would say "what
    do you expect for $1.50". Someone has to cater to that
    market, of "cheap as chips".

    I don't think they really throw anything away.
    There's always someone vending defective flash, somewhere.
    With the bazaar selling model, anything is sustainable.

The TSOP or quad flatpack style chips might be used
in long-ish USB flash sticks, whereas the fine-pitch BGA
could be used for the stubby USB flash sticks. I only have
    one stubby one here, and it sucks. That's the one where I
    couldn't find a datasheet for the chip.

    Paul

  • From Paul@21:1/5 to Paul on Wed Jan 19 04:01:49 2022

    On 1/18/2022 1:51 PM, Paul wrote:
    On 1/18/2022 11:42 AM, R.Wieser wrote:
    Brian,

I think the flash used often has much larger pages than would make sense as an allocation block size.

There is nothing much I can say to that, as it's as unspecific as it comes ...

    Regards,
    Rudy Wieser

    8KB is a good value for a 32GB flash stick.


    Wikipedia says the page size is 4KB to 16KB.

    https://en.wikipedia.org/wiki/Flash_memory

    The pages can't be too large, because that
    would affect the performance of the chip.

    Paul

  • From R.Wieser@21:1/5 to All on Wed Jan 19 11:38:40 2022

    Paul,

    Wikipedia says the page size is 4KB to 16KB.

Which, in short, means three choices: 4K, 8K or 16K, as those numbers need
to be powers of two.

Though that page also mentions something about an "erase page", whose size
can be several megabytes. And somehow that reads as if it doesn't matter whether your allocation size matches the page size (unless it matches the "erasure page" size) ...

    Regards,
    Rudy Wieser

  • From Paul@21:1/5 to R.Wieser on Wed Jan 19 07:44:53 2022

    On 1/19/2022 5:38 AM, R.Wieser wrote:
    Paul,

    Wikipedia says the page size is 4KB to 16KB.

Which, in short, means three choices: 4K, 8K or 16K, as those numbers need to be powers of two.

Though that page also mentions something about an "erase page", whose size can be several megabytes. And somehow that reads as if it doesn't matter whether your allocation size matches the page size (unless it matches the "erasure page" size) ...

    Regards,
    Rudy Wieser

    A 4K selection (default NTFS) would be a poor choice on an 8K
    device, but as long as it has the usual flavor of wear leveling,
the pages can be recycled as needed. Maybe doing 4K clusters on
8K pages costs you two writes, the second write being 8KB of data.
    The second half of a page should be write-able, even if it wasn't
    used on the first attempt. But if the stick did that, the storage
    would be hard to manage.

    I would say, having a larger cluster is likely a better idea.

    And you can use the RidgeCrop formatter, if you want to customize
    the FAT32 on a 128GB USB stick (as Microsoft doesn't do FAT32 above
    a certain size).

    In the case of NTFS, you can easily select 64KB clusters if you want, but if you're doing 50000 files, be aware of the "wasted space" problem. I've
    used 64K clusters on a backup partition before. For no particular
    reason except to try it.
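The "wasted space" problem is easy to estimate: on average each file wastes half a cluster of slack at its tail, so the thread's 50.000-file workload wastes very different amounts at different cluster sizes. A back-of-envelope sketch (the half-cluster average is the usual rule of thumb, not a measured figure):

```python
def slack_bytes(n_files, cluster_bytes):
    """Back-of-envelope slack estimate: each file wastes, on average,
    half a cluster at its tail."""
    return n_files * cluster_bytes // 2

# The thread's workload: 50,000 files, at a few candidate cluster sizes.
for cluster in (4096, 8192, 65536):
    mib = slack_bytes(50_000, cluster) / 2**20
    print("%5d-byte clusters: ~%.0f MiB of slack" % (cluster, mib))
```

At 64K clusters that is roughly 1.5 GiB of slack on a 5 GByte copy, which is why the larger cluster sizes only pay off for big files.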

    One other thing, regarding the heat issue. It seems some of the
    flash chips are 3.3V ones, the USB stick is running off a 5V supply,
    and there could be a dropping regulator of some sort to power the flash.
    That might be where some of the heat comes from.

    Paul

  • From Brian Gregory@21:1/5 to R.Wieser on Fri Jan 21 02:03:29 2022

    On 19/01/2022 10:38, R.Wieser wrote:
    Paul,

    Wikipedia says the page size is 4KB to 16KB.

Which, in short, means three choices: 4K, 8K or 16K, as those numbers need to be powers of two.

Though that page also mentions something about an "erase page", whose size can be several megabytes. And somehow that reads as if it doesn't matter whether your allocation size matches the page size (unless it matches the "erasure page" size) ...

    Why would any size except the "erase page" size be at all relevant to
    what we're discussing?

    --
    Brian Gregory (in England).

  • From R.Wieser@21:1/5 to All on Fri Jan 21 11:05:19 2022

    Brian,

    Why would any size except the "erase page" size be at all relevant to what we're discussing?

Good question. Why don't you start with telling us why it doesn't? :-)

    Regards,
    Rudy Wieser

  • From Brian Gregory@21:1/5 to R.Wieser on Wed Jan 26 13:57:57 2022

    On 21/01/2022 10:05, R.Wieser wrote:
    Brian,

Why would any size except the "erase page" size be at all relevant to what we're discussing?

Good question. Why don't you start with telling us why it doesn't? :-)

    Any other size mismatch can be easily worked around without having to
    wait for the Flash chips to do anything.


    --
    Brian Gregory (in England).

  • From R.Wieser@21:1/5 to All on Wed Jan 26 16:11:49 2022

    Brian,

    Why would any size except the "erase page" size be at all relevant to
    what
    we're discussing?

Good question. Why don't you start with telling us why it doesn't? :-)

    Any other size mismatch can be easily worked around without having to wait for the Flash chips to do anything.

Then you have a problem. I do not know of any computer, Windows or otherwise, which has a cluster size of a few megabytes. IOW, you will /always/ have a mismatch in regard to the erase-page size.

Also, *what* other size can be mismatched?

And by the way: try to imagine what needs to be done when someone writes a block matching some other smaller block size into an erase-block-sized page, which causes the need to have bits changed from a Zero to a One.

    Regards,
    Rudy Wieser

  • From Brian Gregory@21:1/5 to R.Wieser on Wed Jan 26 21:21:18 2022

    On 26/01/2022 15:11, R.Wieser wrote:
    Brian,

    Why would any size except the "erase page" size be at all relevant to
    what
    we're discussing?

Good question. Why don't you start with telling us why it doesn't? :-)

Any other size mismatch can be easily worked around without having to wait for the Flash chips to do anything.

Then you have a problem. I do not know of any computer, Windows or otherwise, which has a cluster size of a few megabytes. IOW, you will /always/ have a mismatch in regard to the erase-page size.

Also, *what* other size can be mismatched?

If your cluster size is smaller than the minimum block size you can read
or write, then you'll need to work around that sometimes, but I don't
think you need to erase and re-write anything extra compared to when the read/write size matches the allocation block size.


And by the way: try to imagine what needs to be done when someone writes a block matching some other smaller block size into an erase-block-sized page, which causes the need to have bits changed from a Zero to a One.

But it does happen less if you make the allocation blocks bigger. A bigger allocation block size means your erase-size blocks get "fragmented" less.

    --
    Brian Gregory (in England).

  • From R.Wieser@21:1/5 to All on Thu Jan 27 02:28:44 2022

    Brian,

    If your cluster size is smaller than the minimum block size you can read
    or write then you'll need to work around that

    That also needs explanation I'm afraid. What problem (do you think)
    (c|w)ould occur and what would the work-around be ?

    but I don't think you need to erase and re-write anything extra compared
    to if the read/write size matches the allocation block size.

Really? Let's assume that:

1) I write (a cluster) to the latter part of the first allocation block and
the first part of the second allocation block

2) both allocation blocks already contain different data than what's written
to them

3) both allocation blocks are in the same erasure page.

What would the flash chip do?

    Now imagine that the cluster size exactly matches the allocation block (and
    is aligned with it).
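The scenario above can be put into a toy cost model: an aligned write into erased pages programs only the pages touched, while a write that lands on live data forces the controller to read, erase and reprogram the whole erase block. The block size below is an arbitrary illustrative value, not taken from any datasheet:

```python
BLOCK_PAGES = 64  # pages per erase block (illustrative value only)

def write_cost(block_holds_data, pages_touched):
    """Toy cost model for NAND: programming erased pages is cheap;
    overwriting live data forces read + erase + reprogram of the
    whole erase block (the read-modify-write case)."""
    if block_holds_data:
        return {"reads": BLOCK_PAGES, "erases": 1, "programs": BLOCK_PAGES}
    return {"reads": 0, "erases": 0, "programs": pages_touched}

aligned = write_cost(False, 1)    # aligned write into erased pages
rudys_case = write_cost(True, 2)  # cluster straddling two live pages
print(aligned, rudys_case)
```

In this model the straddling write costs two orders of magnitude more page operations than the aligned one, which is the effect being argued about.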

    But it does happen less if you make the allocation blocks bigger. Bigger allocation block size means your erase size blocks get "fragmented" less.

While it's true, it's also a red herring. It's not something you or I have
control over. Also, why do you think the producer of those flash chips did
not think of that? I mean, just make the allocation block the same size as the erasure-page size and presto! no more fragmentation ... :-p

    Regards,
    Rudy Wieser

  • From Frank Slootweg@21:1/5 to R.Wieser on Thu Jan 27 15:43:45 2022

    R.Wieser <address@not.available> wrote:
    [...]

Then you have a problem. I do not know of any computer, Windows or otherwise, which has a cluster size of a few megabytes. IOW, you will /always/ have a mismatch in regard to the erase-page size.

Not that it matters one bit for your problem/discussion, but exFAT on
Windows (at least on 8.1) can have allocation unit sizes of up to 32768 kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
16, 32MB).

    [...]

  • From R.Wieser@21:1/5 to All on Thu Jan 27 19:05:53 2022

    Frank,

    but exFAT on Windows (at least on 8.1) can have allocation unit sizes
    of upto 32768 kilobytes, i.e. 32 megabytes.

That doesn't even sound odd to me when looking at the amount of memory a current computer can have. IOW, they can probably handle it. Would not
want to try to use that FS on my old C64 though .... :-)

    "a few megabytes" starts at 2MB (then 4, 8, 16, 32MB).

    I have to remember that. Many of my files are quite a bit smaller than
    that, which would fill up such an FS a bit faster than you would expect
    (lots of slack space).

    Thanks for the info.

    Regards,
    Rudy Wieser

  • From Mayayana@21:1/5 to Frank Slootweg on Thu Jan 27 13:22:02 2022

    "Frank Slootweg" <this@ddress.is.invalid> wrote

    |
    | Not that it matters one bit for your problem/discussion, but exFAT on
| Windows (at least on 8.1) can have allocation unit sizes of up to 32768
    | kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
    | 16, 32MB).

Great. So you can fit up to 31 100-KB files on a 1 TB disk. :)

  • From Frank Slootweg@21:1/5 to Mayayana on Thu Jan 27 18:45:41 2022

    Mayayana <mayayana@invalid.nospam> wrote:
    "Frank Slootweg" <this@ddress.is.invalid> wrote

    |
    | Not that it matters one bit for your problem/discussion, but exFAT on
| Windows (at least on 8.1) can have allocation unit sizes of up to 32768
    | kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
    | 16, 32MB).

    Great. So you can fit up to 31 100 KB files on a 1 TB disk. :)

    Gimme some of what you're smoking! :-)

    MB, not GB.

  • From Paul@21:1/5 to Mayayana on Thu Jan 27 14:35:27 2022

    On 1/27/2022 1:22 PM, Mayayana wrote:
    "Frank Slootweg" <this@ddress.is.invalid> wrote

    |
    | Not that it matters one bit for your problem/discussion, but exFAT on
| Windows (at least on 8.1) can have allocation unit sizes of up to 32768
    | kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
    | 16, 32MB).

    Great. So you can fit up to 31 100 KB files on a 1 TB disk. :)


    It's a capability.

It doesn't mean it's practical to select the high-valued exFAT ones.

    NTFS on Windows 10 has also been given some larger options.
    64KB clusters should work just about anywhere. But the format
    menu has some >64KB options only on Windows 10. And I won't be
    selecting those, any time soon... unless the choice is backward
compatible all the way back to Win2K. It's no good having partitions
that only mount in Windows 10, and nowhere else. For one thing,
    you want the option of using the CHKDSK from the other OSes,
    on occasion.

    You can use outlandish choices like that, if your USB stick holds
    only MRIMG files from Macrium backups. That's the scenario where
    it makes sense.

    Paul

  • From Mayayana@21:1/5 to Frank Slootweg on Thu Jan 27 15:53:59 2022

    "Frank Slootweg" <this@ddress.is.invalid> wrote

    | > Great. So you can fit up to 31 100 KB files on a 1 TB disk. :)
    |
    | Gimme some of what you're smoking! :-)
    |
    | MB, not GB.

    Woops. Sorry, you're right. So I could save 31,000
    small text files. But I couldn't fit the 42K files I have
    on my 3 GB backup drive on that 1 TB drive.
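For the record, the arithmetic both ways (decimal units, matching the round numbers above): a 1 TB disk divided into 32 MB allocation units holds at most about 31,250 files however small they are, and 42,000 such files would need about 1.34 TB:

```python
TB = 10**12  # decimal terabyte, matching the round numbers in the thread
MB = 10**6   # decimal megabyte

max_files = TB // (32 * MB)      # at most one file per 32 MB allocation unit
space_needed = 42_000 * 32 * MB  # 42,000 small files, one cluster each
print(max_files, space_needed / TB)
```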

  • From Brian Gregory@21:1/5 to R.Wieser on Fri Jan 28 10:09:28 2022

    On 27/01/2022 01:28, R.Wieser wrote:
    Brian,

    If your cluster size is smaller than the minimum block size you can read
    or write then you'll need to work around that

    That also needs explanation I'm afraid. What problem (do you think) (c|w)ould occur and what would the work-around be ?

    but I don't think you need to erase and re-write anything extra compared
    to if the read/write size matches the allocation block size.

Really? Let's assume that:

    1) I write (a cluster) to the latter part of the first allocation block and the first part of the second allocation block

    What are you meaning by allocation block?

    I was using the name "allocation block" to mean what also gets called
    cluster - the filing system allocation block size.

    I guess you mean the block size in the flash.

    Firstly I'm assuming the sizes are binary multiples, so no cluster
    crosses a boundary between flash blocks.

    Secondly I'm assuming that the storage system is managing storage space
    and doing wear levelling and will try and arrange things so that when
    possible (hopefully most of the time in many cases) block writes are
directed to areas of flash that are in the erased state (you can have a
flash block that is divided up into clusters and leave unused clusters
in the erased state).

    --
    Brian Gregory (in England).

  • From R.Wieser@21:1/5 to All on Fri Jan 28 14:27:43 2022

    Brian,

    What are you meaning by allocation block?

    :-) What did *you* mean with it ?

    I was using the name "allocation block" to mean what also gets called
    cluster - the filing system allocation block size.

Really? Then what do you call a *flash* (not FS) read/write block (no, not the erase-page one)? 'cause that is what we are talking about, right?

    Also, your message <j4l3s0Fb8r1U1@mid.individual.net> (17-01-22 12:56:47)
    seems to contradict the above ...

And why /did/ you choose to use an ambiguous name like "allocation block" when there is, in regard to a harddisk, a perfectly good (and rather distinctive) "cluster" definition* available?

    * a rather good definition, as I could as easily assume that an "allocation block" on a harddisk would be referring to a sector.

    I guess you mean the block size in the flash.

Lol? Which "block size in the flash" are you referring to here? The read/write one? The erase-page one? Maybe yet another?

    Firstly I'm assuming the sizes are binary multiples, so no cluster crosses
    a boundary between flash blocks.

    See above. It fully depends on what you are referring to with that "flash block".

On the off chance that you're referring to the flash's read/write page then
the opposite is true - the read/write page matches or is smaller than the FS cluster that's projected on top of it.

    Secondly I'm assuming that the storage system is managing storage space
    and doing wear levelling

It's *hard* to respond to an example, isn't it? It forces you to actually /think/ about the truths you hold as evident. Better make some
assumptions to make that kind of effort go away. :-\

But as you seem to have no intention of actually answering me, it makes no sense
to me to continue our conversation. So, goodbye.

    Regards,
    Rudy Wieser

  • From Brian Gregory@21:1/5 to R.Wieser on Sat Jan 29 01:32:07 2022

    On 28/01/2022 13:27, R.Wieser wrote:
    Brian,

    What are you meaning by allocation block?

    :-) What did *you* mean with it ?

    I was using the name "allocation block" to mean what also gets called
    cluster - the filing system allocation block size.

Really? Then what do you call a *flash* (not FS) read/write block (no, not the erase-page one)? 'cause that is what we are talking about, right?

    Also, your message <j4l3s0Fb8r1U1@mid.individual.net> (17-01-22 12:56:47) seems to contradict the above ...

And why /did/ you choose to use an ambiguous name like "allocation block" when there is, in regard to a harddisk, a perfectly good (and rather distinctive) "cluster" definition* available?

    * a rather good definition, as I could as easily assume that an "allocation block" on a harddisk would be referring to a sector.


Windows calls what we used to call clusters "allocation units".


    chkdsk
    The type of the file system is NTFS.
    Volume label is Local Disk 1.

    WARNING! F parameter not specified.
    Running CHKDSK in read-only mode.

    CHKDSK is verifying files (stage 1 of 3)...
    127232 file records processed.
    File verification completed.
    13407 large file records processed.
    0 bad file records processed.
    0 EA records processed.
    25 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 3)...
    139456 index entries processed.
    Index verification completed.
    0 unindexed files scanned.
    0 unindexed files recovered.
    CHKDSK is verifying security descriptors (stage 3 of 3)...
    127232 file SDs/SIDs processed.
    Security descriptor verification completed.
    6113 data files processed.
    CHKDSK is verifying Usn Journal...
    37122192 USN bytes processed.
    Usn Journal verification completed.
    Windows has checked the file system and found no problems.

    1023998975 KB total disk space.
    999706604 KB in 63950 files.
    36132 KB in 6114 indexes.
    0 KB in bad sectors.
    263007 KB in use by the system.
    65536 KB occupied by the log file.
    23993232 KB available on disk.

    4096 bytes in each allocation unit.
    255999743 total allocation units on disk.
    5998308 allocation units available on disk.

    See.

    I guess you mean the block size in the flash.

    Lol ? Which "block size in the flash" are you here referring to ? The read/write one ? the erase-page one ? Maybe yet another ?


    It's up to you to be less ambiguous.



Firstly I'm assuming the sizes are binary multiples, so no cluster crosses a boundary between flash blocks.

    See above. It fully depends on what you are referring to with that "flash block".

    It should apply to any of them.


On the off chance that you're referring to the flash's read/write page then the opposite is true - the read/write page matches or is smaller than the FS cluster that's projected on top of it.


    Then I have no idea what point you were trying to make.

If you think those blocks are smaller, then no read-modify-write is required.


    Secondly I'm assuming that the storage system is managing storage space
    and doing wear levelling

It's *hard* to respond to an example, isn't it? It forces you to actually /think/ about the truths you hold as evident. Better make some
assumptions to make that kind of effort go away. :-\

But as you seem to have no intention of actually answering me, it makes no sense to me to continue our conversation. So, goodbye.

    Stupid moron.

    --
    Brian Gregory (in England).
