Hello all,
I've got a simple program using CopyFile (kernel32) on an XPsp3 machine to copy a number of files to an 8 GByte, FAT32-formatted USB stick, and notice that the whole thing slows down to a crawl (about 5 GByte consisting of 50,000 files in over eight hours). I've also tried another, different USB stick and see the same thing happen.
Windows task-manager shows my program's processor usage as zero percent.
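For reference, the core of it is essentially just a find-next loop around CopyFile, along these lines (a stripped-down sketch, not the actual program; the paths are made up and it ignores subdirectories):

    /* Minimal illustration of the kind of loop involved (ANSI API variants). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA("C:\\Source\\*", &fd);
        if (h == INVALID_HANDLE_VALUE) return 1;
        do {
            if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
                char src[MAX_PATH], dst[MAX_PATH];
                sprintf(src, "C:\\Source\\%s", fd.cFileName);
                sprintf(dst, "E:\\Backup\\%s", fd.cFileName);
                if (!CopyFileA(src, dst, FALSE))     /* FALSE = overwrite if present */
                    printf("copy failed: %s (error %lu)\n", src, GetLastError());
            }
        } while (FindNextFileA(h, &fd));
        FindClose(h);
        return 0;
    }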
Remark: I've used the same program to copy even larger quantities of files
to a USB harddisk, and have not experienced any kind of slowdown there.
I know that a USB stick is rather slow in comparison to a harddisk, but
the above is just ludicrous. :-(
Does anyone have an idea what is going on and how to speed the whole thing
up ?
Regards,
Rudy Wieser
P.s.
I remember having had similar problems while drag-and-dropping the source folder onto the USB stick. IOW, not a problem with the program itself.
Maybe because it's USB2?
The only other thing I can think of would be if you were
using some kind of AV program.
I could use a Sandisk Extreme or Sandisk Extreme Pro and this transfer
would be finished in two minutes, tops.
If you wanted USB3 on your Windows XP machine,
My totally unscientific observation is that writing to a USB2 device
starts out fast, and when it inevitably slows to a crawl I notice
that the device is blazing hot.
If I allow it to cool down,
Char,
My totally unscientific observation is that writing to a USB2 device
starts out fast, and when it inevitably slows to a crawl I notice
that the device is blazing hot.
I tried checking the stick's temperature (removed the "sleeve") and regularly touched both the processor as well as the flash chip on the other side. I would not call it hot by any means, just a bit warm.
If I allow it to cool down,
I have to try to add a keystroke to halt the copying temporarily and see
what happens.
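Probably nothing fancier than something like this between files (just a sketch, not the actual program; the choice of the P key is arbitrary):

    /* Sketch: call this between files; pauses when the P key is held down. */
    #include <windows.h>
    #include <stdio.h>

    void MaybePause(void)
    {
        if (GetAsyncKeyState('P') & 0x8000) {
            printf("Paused - press Enter to continue.\n");
            getchar();
        }
    }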
(@all)
And by the way, it looks like the slowing-down is not gradual (as I would expect when a device gets hotter), but sudden. It took 45 minutes to
copy about 4 GByte, and after that each file takes 6 seconds.
Odd to say the least ...
Regards,
Rudy Wieser
I bet the problem does not occur .... if you do the copy from a Command Prompt window.
On Windows XP, Robocopy can be installed via a download.
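For example, something along these lines (paths are placeholders; /E copies subdirectories including empty ones, /R:1 /W:1 keep it from endlessly retrying files that fail):

    robocopy C:\Source E:\Backup /E /R:1 /W:1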
Char,
My totally unscientific observation is that writing to a USB2 device
starts out fast, and when it inevitably slows to a crawl I notice
that the device is blazing hot.
I tried checking the stick's temperature (removed the "sleeve") and regularly touched both the processor as well as the flash chip on the other side. I would not call it hot by any means, just a bit warm.
Well, I've aborted the copy after about an hour, with just 1.6 GByte (out of
5 GByte) having been copied. As such it does /much/ worse than the second stick.
Thanks for checking.
On at least two of my USB2 sticks, they get too hot to touch
if I do extended writes, but it sounds like you've disproved
my theory.
What is the setting for "Enfernungsrichtlinie" (removal policy) in the properties of the disk?
Herbert,
What is the setting for "Enfernungsrichtlinie" in properties of the disk?
When the memory stick was FAT32 formatted it was set to the first. Later I formatted it to NTFS, which is a choice you only get when the second is
enabled.
In short, I have had both for that stick. Didn't make much of a difference.
On 2022-01-13, R.Wieser <address@not.available> wrote:
Herbert,
What is the setting for "Enfernungsrichtlinie" in properties of the disk?
When the memory stick was FAT32 formatted stick it was the first. Later I formatted it to NTFS, which a choice you only get when the second is enabled.
In short, I have had both for that stick. Didn't make much of a difference.
I had a thumb drive formatted as NTFS for quite some time. The performance really sucked, but I stuck with it because FAT32 only stores time stamps to the nearest 2 seconds, making time stamp comparison (and makefiles) problematic.
However, as you noted earlier, I also noticed that copying large numbers
of small files to a flash drive runs like molasses through a pinhole in January,
so I resorted to the solution that has already been mentioned here: zip
the files on your hard drive and then copy the .zip file to the flash
drive.
Zipping directly to the flash drive causes many small work files to be created and scratched on it, again resulting in abominable performance.
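For example, with a command-line archiver such as 7-Zip (assuming it is installed; paths are placeholders):

    7z a C:\Temp\backup.zip C:\Source\*
    copy C:\Temp\backup.zip E:\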
Charlie,
but I stuck with it because FAT32 only stores time stamps to
the nearest 2 seconds, making time stamp comparison (and
makefiles) problematic.
I encountered a similar problem: even a just-copied file could be a number of seconds different from the original. As I'm not in the habit of running backups while working on my machine I decided to allow for a 10 second difference in timestamps.
That, and problems with summer/wintertime differences. :-\
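Something along these lines (just a sketch, not my actual code; the tolerance is passed in seconds):

    /* Compare two FILETIMEs with a tolerance. FILETIME counts 100-ns units. */
    #include <windows.h>

    BOOL SameTimeWithin(const FILETIME *a, const FILETIME *b, ULONGLONG seconds)
    {
        ULARGE_INTEGER ua, ub;
        ULONGLONG diff, tol = seconds * 10000000ULL;   /* seconds -> 100-ns units */
        ua.LowPart = a->dwLowDateTime;  ua.HighPart = a->dwHighDateTime;
        ub.LowPart = b->dwLowDateTime;  ub.HighPart = b->dwHighDateTime;
        diff = (ua.QuadPart > ub.QuadPart) ? ua.QuadPart - ub.QuadPart
                                           : ub.QuadPart - ua.QuadPart;
        return diff <= tol;
    }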
However, as you noted earlier, I also noticed that copying large numbers
of small files to a flash drive runs like molasses through a pinhole in
January,
Hmmm... Although comparatively it takes a USB stick quite a bit longer
than a USB HD, I do not really see any slow-down over the duration of the copying itself.
so I resorted to the solution that has already been mentioned here: zip
the files on your hard drive and then copy the .zip file to the flash
drive.
I also considered that, but that would have thrown quite a spanner in what
my program was actually built for: checking for changed files and updating/deleting them on the copy.
Also, I wanted to be sure that regardless of how much space I had left on
the source I would always be able to make a backup or update it.
Next to that I also considered the possibility that a write or read error (bad sector) could trash a compressed file, with little-to-no chance of recovery.
But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus
fast performance) removal policy, at least they don't on my Windows 8.1 system.
One of the reasons that I like to use containers, such as zip or rar,
is that data corruption is much easier to spot.
With bare files that aren't in a container, bit rot can go undetected
for a long period ...
My main tool for detecting bit rot, and then *correcting* it,
is Quickpar.
But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus fast performance) removal policy, at least they don't on my Windows 8.1 system.
Frank,
P.s.
The way you quoted it made it look as if /I/ posted that "what if" line ...
On 14.01.2022 16:16, Frank Slootweg wrote:
But AFAIK/AFAICT, USB memory sticks do not have a (fast removal versus fast performance) removal policy, at least they don't on my Windows 8.1 system.
in explorer right-click on the external usb drive
select "Properties"
on the Hardware tab, select your device and select "Properties"
select "Change Settings" on the General tab.
select the "Policies" tab
Char,
One of the reasons that I like to use containers, such as zip or rar,
is that data corruption is much easier to spot.
You mean, the first time you try to use /any/ file out of it, it barfs and refuses to let you access most, if not all, of its contained files ? :-|
With bare files that aren't in a container, bit rot can go undetected
for a long period ...
True. But then I only lose the affected file(s), not the whole (or
many/most of the) container.
... and once found, it can be very difficult to correct.
Harder than fixing a bit-rotted container ? :-p
But granted, the sooner you become aware of a problem with a backup the better.
My main tool for detecting bit rot, and then *correcting* it,
is Quickpar.
:-) You make it sound as if you can just throw a file at it and it will detect and correct bitrot in it. Which is of course not quite the way it works.
Though granted, pulling the files through such a program (which adds detection/recovery information to the file) before saving them to a backup does enable the files to survive a decent amount of bitrot or other similarly small damage.
I suppose so, but I'd say that's preferable to the alternative, which
is not knowing
I suppose each file would have to be loaded in some way...
but all of the major container types give you an easy method to
detect corruption of the container itself
Right, you have to plan ahead. You have to ask yourself, are these
files something that I care about?
If so, how much damage do I want to be prepared to correct?
Quickpar creates additional files
Sorry, a case of sloppy reading! I thought you didn't understand
Herbert's German text.
I will now crawl back under my rock. :-)
Char,
I suppose so, but I'd say that's preferable to the alternative, which
is not knowing
Hmmm... not knowing versus losing everything. I think I would go for not knowing.
That's a false choice, obviously. :-)
The choice is knowing or not knowing.
It's much, much easier to check a container (and by doing so,
check all of its contents in one fell swoop), than to check
individual files.
It sounds like you have a solution that works for you, though, so
it's all good.
Sometimes, especially on the XP machine, the USB connection drops in
the middle of a transfer, and then I need to run CHKDSK on the flash
drive, otherwise it slows down or might lose data.
Frank,
Sorry, a case of sloppy reading! I thought you didn't understand
Herbert's German text.
I'm actually from his west-side next-door neighbour, the Netherlands.
A lot of "nachsynchronisierter schaukasten films" (hope I wrote that right) in my youth and a year getting at school enabled me to read most of it.
I will now crawl back under my rock. :-)
Ah, you do not need to go /that/ far... :-p
Yes, I know that, like me, you're a Dutchie. In the beginning your
lastname had me thinking you were German, but that was a long time ago.
I hate(d) those films. Earth to Germany: John Wayne isn't (wasn't) a
German. And - in news programs - neither is Joe Biden.
There's this thing called 'subtitles', you should try it some time! :-(
I will now crawl back under my rock. :-)
Ah, you do not need to go /that/ far... :-p
Well, it's actually quite cosy! :-)
Paul,
Thanks for the suggestions.
The reason why I copy files is that I can then access them using any kind of filebrowser - even a simple one which doesn't understand the concept of ZIP folders.
As for the cluster size, the second USB stick was new, and as such I assume it was pre-formatted with the optimum block size. Also, it was NTFS (I had hoped it would make a difference, but it looks like it doesn't).
And there is a problem with that suggestion: years ago, when I thought of the same (micro-SD connected to a DOS 'puter), I was unable to get that optimum from the USB stick itself. No matter which drive interrogation function I looked at, none of them returned the size of the FLASH chip's memory blocks. :-(
As a rule of thumb always format USB flash drives with 32K allocation
blocks.
A possible alternative is to remember what size blocks were used when it
was brand new.
By default Windoze tends to format USB flash with blocks that are much too small.
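For example, from a Command Prompt (the drive letter is a placeholder; /A sets the allocation unit size, /FS the file system, /Q does a quick format):

    format E: /FS:FAT32 /A:32K /Q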
"R.Wieser" <address@not.available> wrote
| > As a rule of thumb always format USB flash drives with 32K allocation
| > blocks.
|
| Thanks. I've heard that number before, but never thought of it as an
| universal one (true for sticks of any make and size). I've always assumed
| that the flash-chips block size could/would change with the chips size.
|
It's not universal. Block size only relates to the limits
of 4-byte integers. I don't know whether the counting is
an unsigned or signed number, but the blocks have to
be big enough that the number of them doesn't end up
more than the limit of a long integer on a 32-bit system.
P.s.
I just checked, and the largest "allocation size" I can choose is a mere
4096 bytes ....
What was your stick-size and type of filesystem again? (I lost
track in this long thread.)
On 2022-01-16, R.Wieser <address@not.available> wrote:
By default Windoze tends to format USB flash with blocks that are much too small.
And there I was, assuming that MS would know more about this than I could
ever hope to, and therefore use some sensible default. :-(
Sensible default? Microsoft? Thanks - I needed a laugh this morning.
Frank,
What was your stick-size and type of filesystem again? (I lost
track in this long thread.)
It's an 8 GByte stick, currently NTFS formatted.
Regards,
Rudy Wieser
Brian,
As a rule of thumb always format USB flash drives with 32K allocation
blocks.
Thanks. I've heard that number before, but never thought of it as a universal one (true for sticks of any make and size). I've always assumed that the flash chip's block size could/would change with the chip's size.
If you want it to work fast don't use NTFS. Use FAT or exFAT.
It probably does change. But we're not needing an exact match to get a
speed benefit.
Having a big block on the drive divided into many small allocation blocks seems to be what can slow things down.
Steve,
Sometimes, especially on the XP machine, the USB connection drops in
the middle of a transfer, and then I need to run CHKDSK on the flash
drive, otherwise it slows down or might lose data.
Thanks for the suggestion. Someone else mentioned that his stick heats up while writing and causes problems like I described. Maybe yours has the same problem ?
After I posted my above question and not getting a "That's a well-known problem" response I've done a number of tests. As it turns out the problem is that specific USB stick (see the further posts I made).
As such I decided to effectively bin it (I can still muck around with it,
but it won't ever be used for storage again).
Regards,
Rudy Wieser
Might be interesting to open it up and add a heat sink and see what
happens.
Brian,
If you want it to work fast don't use NTFS. Use FAT or exFAT.
I only reformatted to NTFS to check if the problem was perhaps file-system related. It doesn't seem to be.
It probably does change. But we're not needing an exact match to get a
speed benefit.
Speed is one thing. Not needing to read-modify-write the same flash block
is another. And that will happen if the allocation size is not the same
as, or a multiple of, the flash-block size.
Having a big block on the drive divided into many small allocation blocks
seems to be what can slow things down.
:-) You or the stick needing to do multiple read-modify-write actions will /definitely/ slow the stick down more than a simple write action - even if you just divide the flash block in (an uneven) two.
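(As a worked example, with made-up but plausible sizes: if the flash page is 16 KB and the cluster size is 4 KB, rewriting a single cluster means the controller has to read the other 12 KB of the page, merge it with the new 4 KB, and write a full 16 KB page - four times the data that actually changed.)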
IOW, I do understand the mechanics behind it. Which is why I'm (still) a
bit surprised you cannot ask the stick for its optimum allocation size ...
I think the flash used often has much larger pages than would make sense
as an allocation block size.
If you can, try with XCOPY.
XCOPY is good at optimizing the writes to the directory.
I think it opens and creates all the files first, then copies the contents one by one, and finally closes all the files.
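For example (paths are placeholders; /E copies subdirectories including empty ones, /C continues on errors, /H includes hidden files, /I assumes the destination is a directory, /Y suppresses overwrite prompts):

    xcopy C:\Source E:\Backup /E /C /H /I /Y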
If you cleaned the stick and prepared it on WinXP, then bits of[]
the geometry are divisible by 63, and that is not a power of two
number, and that doubles the amount of work for writes (assuming
lots of small files, not just one big file).
On Tue, 18 Jan 2022 at 13:04:09, Paul <nospam@needed.invalid> wrote (my responses usually follow points raised):
[]
If you cleaned the stick and prepared it on WinXP, then bits of[]
the geometry are divisible by 63, and that is not a power of two
number, and that doubles the amount of work for writes (assuming
lots of small files, not just one big file).
How did the 63 size come about? It seems a very odd choice in computing.
Brian,
I think the flash used often has much larger pages than would make sense
as an allocation block size.
There is nothing much I can say to that, as it's as unspecific as it comes
...
Regards,
Rudy Wieser
On 1/18/2022 11:42 AM, R.Wieser wrote:
Brian,
I think the flash used often has much larger pages than would make sense as an allocation block size.
There is nothing much I can say to that, as its as unspecific as it comes
...
Regards,
Rudy Wieser
8KB is a good value for a 32GB flash stick.
Wikipedia says the page size is 4KB to 16KB.
Paul,
Wikipedia says the page size is 4KB to 16KB.
Which, in short, means three choices: 4K, 8K or 16K, as those numbers need to be powers of two.
Though that page also mentions something about an "erase page", whose size can be multiple millions of bytes. And somehow that reads as if it doesn't matter whether your allocation size matches the page size (unless it matches the "erase page" size) ...
Regards,
Rudy Wieser
Paul,
Wikipedia says the page size is 4KB to 16KB.
Which, in short, means three choices : 4K, 8K or 16K , as those numbers need to be powers of two.
Though that page also mentions something about an "erase page", which size can be multiple millions of bytes. And somehow that reads as it doesn't matter if your allocation size matches the page size (unless it matches the "erasure page" size) ...
Why would any size except the "erase page" size be at all relevant to what we're discussing?
Brian,
Why would any size except the "erase page" size be at all relevant to what we're discussing?
Good question. Why don't you start with telling us why it doesn't? :-)
Why would any size except the "erase page" size be at all relevant to
what
we're discussing?
Good question. Why don't you start with telling us why it doesn't. ? :-)
Any other size mismatch can be easily worked around without having to wait for the Flash chips to do anything.
Brian,
Why would any size except the "erase page" size be at all relevant to
what
we're discussing?
Good question. Why don't you start with telling us why it doesn't. ? :-)
Any other size mismatch can be easily worked around without having to wait for the Flash chips to do anything.
Then you have a problem. I do not know of any computer, Windows or otherwise, which has a cluster size of a few megabytes. IOW, you will /always/ have a mismatch in regard to the erase-page size.
Also, *what* other size can be mismatched ?
And by the way: try to imagine what needs to be done when someone writes a block matching some other, smaller block size into an erase-block sized page, which causes the need to have bits changed from a Zero to a One.
If your cluster size is smaller than the minimum block size you can read
or write then you'll need to work around that
but I don't think you need to erase and re-write anything extra compared
to if the read/write size matches the allocation block size.
But it does happen less if you make the allocation blocks bigger. Bigger allocation block size means your erase size blocks get "fragmented" less.
Than you have a problem. I do not know of any computer, Windows or otherwise, which has a cluster size of a few megabytes. IOW, you will /always/ have a mismatch in regard to the erase-page size.
Not that it matters one bit for your problem/discussion, but exFAT on Windows (at least on 8.1) can have allocation unit sizes
of up to 32768 kilobytes, i.e. 32 megabytes.
"a few megabytes" starts at 2MB (then 4, 8, 16, 32MB).
"Frank Slootweg" <this@ddress.is.invalid> wrote
|
| Not that it matters one bit for your problem/discussion, but exFAT on
| Windows (at least on 8.1) can have allocation unit sizes of upto 32768
| kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
| 16, 32MB).
Great. So you can fit up to 31 100 KB files on a 1 TB disk. :)
"Frank Slootweg" <this@ddress.is.invalid> wrote
|
| Not that it matters one bit for your problem/discussion, but exFAT on
| Windows (at least on 8.1) can have allocation unit sizes of upto 32768
| kilobytes, i.e. 32 megabytes. "a few megabytes" starts at 2MB (then 4, 8,
| 16, 32MB).
Great. So you can fit up to 31 100 KB files on a 1 TB disk. :)
Brian,
If your cluster size is smaller than the minimum block size you can read
or write then you'll need to work around that
That also needs explanation I'm afraid. What problem (do you think) (c|w)ould occur and what would the work-around be ?
but I don't think you need to erase and re-write anything extra compared
to if the read/write size matches the allocation block size.
Really ? Let's assume that:
1) I write (a cluster) to the latter part of the first allocation block and the first part of the second allocation block
What are you meaning by allocation block?
I was using the name "allocation block" to mean what also gets called
cluster - the filing system allocation block size.
I guess you mean the block size in the flash.
Firstly I'm assuming the sizes are binary multiples, so no cluster crosses
a boundary between flash blocks.
Secondly I'm assuming that the storage system is managing storage space
and doing wear levelling
Brian,
What are you meaning by allocation block?
:-) What did *you* mean by it ?
I was using the name "allocation block" to mean what also gets called
cluster - the filing system allocation block size.
Really ? Then what do you call a *flash* (not FS) read/write block (no, not the erase-page one) ? 'cause that is what we are talking about, right ?
Also, your message <j4l3s0Fb8r1U1@mid.individual.net> (17-01-22 12:56:47) seems to contradict the above ...
And why /did/ you choose to use an ambiguous name like "allocation block" when there is, in regard to a harddisk, a perfectly good (and rather distinctive) "cluster" definition* available ?
* a rather good definition, as I could as easily assume that an "allocation block" on a harddisk would be referring to a sector.
I guess you mean the block size in the flash.
Lol ? Which "block size in the flash" are you referring to here ? The read/write one ? The erase-page one ? Maybe yet another ?
Firstly I'm assuming the sizes are binary multiples, so no cluster crosses a boundary between flash blocks.
See above. It fully depends on what you are referring to with that "flash block".
On the off chance that you're referring to the flash's read/write page, then the opposite is true - the read/write page matches or is smaller than the FS cluster that's projected on top of it.
Secondly I'm assuming that the storage system is managing storage space
and doing wear levelling
It's *hard* to respond to an example, isn't it ? It forces you to actually /think/ about the truths you hold as evident. Better to make some
assumptions to make that kind of effort go away. :-\
But as you seem to have no intention to actually answer me it makes no sense to me to continue our conversation. So, goodbye.