• Claim: Cheap USB sticks have fast memory only at the beginning, the rest is slow

    From Java Jive@21:1/5 to All on Fri Jan 27 00:02:54 2023
    XPost: alt.os.linux, alt.windows7.general

    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the
    beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back to
    the beginning for address ranges beyond the physical limit actually
    installed, but the above claim is a new con on me. Has anyone else
    encountered this?
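(A full-surface write-read-verify pass is the standard way to expose either trick. A minimal sketch, assuming a mount point and capacity that are placeholders, not a tested tool: each block is stamped with its own index, so on a wrapped stick early blocks read back with later blocks' tags.)

```python
import os

# Minimal write-read-verify sketch for spotting fake capacity or
# address wrap-around. MOUNT and TOTAL_MB are placeholders -- point
# them at a stick you can wipe, covering its full advertised capacity.
MOUNT = "/mnt/usbstick"      # hypothetical mount point
TOTAL_MB = 128 * 1024        # advertised capacity in MiB
BLOCK = 1024 * 1024          # probe in 1 MiB blocks

def write_tagged_blocks(path, total_mb):
    """Write blocks, each stamped with its own index."""
    with open(path, "wb") as f:
        for i in range(total_mb):
            tag = i.to_bytes(8, "little")
            f.write(tag * (BLOCK // 8))   # fill the block with the tag
        f.flush()
        os.fsync(f.fileno())              # force the data onto the device

def verify_tagged_blocks(path, total_mb):
    """Read the blocks back; wrapped addressing shows up as wrong tags."""
    bad = []
    with open(path, "rb") as f:
        for i in range(total_mb):
            if f.read(BLOCK)[:8] != i.to_bytes(8, "little"):
                bad.append(i)
    return bad

if __name__ == "__main__" and os.path.isdir(MOUNT):
    probe = os.path.join(MOUNT, "capacity_probe.bin")
    write_tagged_blocks(probe, TOTAL_MB)
    print("blocks failing verification:", verify_tagged_blocks(probe, TOTAL_MB))
```

On an honest stick the mismatch list is empty; on one that silently wraps, blocks near the start come back carrying the tags of later blocks.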

    FTR, here's what he says in full (auto-translated from Spanish into
    English) ...

    "Cliente Amazon
    1.0 out of 5 stars Satisfaction and disappointment mixed
    Reviewed in Spain on 10 December 2019
    Verified Purchase

As on previous occasions I had already been caught out by other Chinese
manufacturers whose external pendrives or SSD disks not only failed to
deliver what was promised but were a complete deception, mixing two
types of memory internally (fast at first and very slow afterwards), I
decided to do a complete test of each of the 10 pendrives. I will try
to explain the process I performed as clearly as possible for less
experienced users.

I proceeded to format each pendrive with the “Quick Format” option
unchecked, noting the start and end times to determine the actual
write speed (which is essentially what a full format measures).

The joys and disappointments in the process did not take long: some
took about 8 minutes to format (writing speeds of 15~17 MB/sec,
reasonably “high” for USB 2.0) and the rest between 17 and 21 minutes,
which means that at most they reached typical USB 1.0 speeds, with
“variable” formatting times (nor did I want to do the speed
calculations, so as not to get bitter).

Of course, with these pendrives it is true that “random access”...

What really annoyed me, when I watched the speed of the formatting
process on the “slow” pendrives, was that at first the progress seemed
“normal” for the first gigabyte or two, and that then there was,
visually, almost a standstill in the speed of that progress until
completion. This, in my opinion, clearly indicates that the
manufacturer (again, another manufacturer) has mixed chips with “fast”
access alongside very slow chips, placing the fastest “at the
beginning” so that the usual test tools give “good results”.

    That is why I, personally, [no longer] use the typical test tools that
    usually only verify a “minimum” part (just the “initial”) of the total capacity of the external disks/pendrives. I prefer to take a little
    longer and perform my writing/formatting tests more comprehensively due
    to the bad experiences I have had with different memory manufacturers.

    However, in my tests none of the pendrives gave write errors: a simple
    method to know is to use chkdsk (in the command console) when the
    formatting is finished and check that the text “bad sectors” does not appear.

    Of the 10 pendrives, 7 had an acceptable speed (one took 10 minutes, but
    I “accept it with resignation” as passable) and 3 were horribly slow.

    As the overall price is reasonable, I don't dare complain about it, but
    I value it with only one star because of the “unexpected surprises” I
    have encountered.

I'm sorry about the whole thing, but I think this is the only way to
show the problem and try to help other less experienced users perform
their own checks more reliably."
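(The reviewer's format-timing approach can be reproduced more directly by timing fixed-size chunks across the drive. A rough sketch, where the chunk size and target path are assumptions; the fsync is needed so the OS page cache doesn't flatter the numbers:)

```python
import os
import time

def chunk_write_speeds(path, chunks, chunk_bytes=64 * 1024 * 1024):
    """Write `chunks` fixed-size chunks and return per-chunk MB/s."""
    speeds = []
    payload = os.urandom(chunk_bytes)     # incompressible test data
    with open(path, "wb") as f:
        for _ in range(chunks):
            t0 = time.perf_counter()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())          # wait for the device, not the cache
            dt = time.perf_counter() - t0
            speeds.append(chunk_bytes / (1024 * 1024) / dt)
    return speeds

# e.g. chunk_write_speeds("/mnt/usbstick/speed_probe.bin", 32)
```

A "fast chips at the front" stick, if one exists, would show high figures for the first few chunks and a collapse for the rest; a healthy stick shows roughly flat numbers.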

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From iKook@21:1/5 to Java Jive on Fri Jan 27 01:30:12 2023
    XPost: alt.windows7.general, alt.os.linux

    On 27/01/2023 00:02, Java Jive wrote:


    As on previous occasions I had already happened with other Chinese manufacturers that the external pendrives or SSD disks not only did
    not fulfill what was promised but were also a complete deception,
    since they mixed internally two types of memory (fast at first and
    very slow afterwards), I decided to do a complete test for each of the
    10 pendrives. I will try to explain the process I performed as best as possible for less experienced users.



Can you explain to us why Chinese manufacturers would do what you
suggest? How much do they save, or to put it in positive terms, how
much do they make by doing that, given the amount of work involved in
doing what you suggest they are doing? The item costs £14.38 (I
believe it is UK), so for a few cents/pennies, are they so stupid as
to ruin their business and reputation? These days everything is made
in China, so are we going to spend time testing everything from that
country? It will only get more expensive for us when there are no
alternative manufacturers. European and American governments have
destroyed their own factories by outsourcing everything to China and
India.

Amazon Basics has similar items: <Amazon Basics - 128 GB, USB 3.1 Flash
Drive, Read Speed up to 130 MB/s : Amazon.co.uk: Computers & Accessories <https://www.amazon.co.uk/dp/B0B6148YKN/>>

You get 128GB for £14.06 and 256GB for £21.74

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Java Jive on Fri Jan 27 00:07:26 2023
    XPost: alt.os.linux, alt.windows7.general

    On 1/26/2023 7:02 PM, Java Jive wrote:
    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all over again, an Amazon customer for this 128GB USB 3.1 drive  ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

The reviews I just glanced at are for products other than the
item in question. This is "the Amazon way". Amazon allows bozo traders
to pour every review they ever got, for tea towels and scone warmers,
into the reviews of a USB stick.

The "lowest tier" of USB3 sticks reads at 100 MB/sec and writes at
10 MB/sec. The write rate gets worse as it ages. I have a 128GB stick
with this characteristic. When an advert does not mention the speed,
this is what you're getting.

    The write speed never normally matches the read speed. The advert
    presenting 320 read 200 write, is not impossible.

    What you will find, is a bit of poetic license. The write rate
    will probably be around 100 or so. Best case.

    *******

If capacity fraud has been committed, then yes, it's true
there is weird write-rate behavior. The banana-crisps who test
these products they buy are not clever enough to do a write-read-verify
and determine they have been defrauded. "Oh, YES, I did get 2TB of
storage for £10." Yet, if you ask them to compare the hard drive copy
to what is on the stick now, they suddenly become silent.

    *******

    On SSD and NVMe, the TLC flash uses a portion of the flash set up as
    SLC. This is called the SLC cache. Writes happen in two stages.
    Fast write into the SLC cache. Slow later transfer into the main TLC body
    of the device. Once the SLC cache is full, device write rates drop to the
    TLC rate. This behavior can be seen on TLC and QLC drives.
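(As a toy illustration of that two-stage behaviour: the cache size and rates below are made-up numbers, not measurements of any real drive.)

```python
# Toy model of the SLC-cache behaviour described above: sustained
# writes run at the fast SLC rate until the cache fills, then drop to
# the raw TLC rate. All sizes and rates are invented for illustration.

def effective_write_rate(written_gb, cache_gb=20, slc_mb_s=400, tlc_mb_s=60):
    """Apparent write rate after `written_gb` GB of sustained writing."""
    return slc_mb_s if written_gb < cache_gb else tlc_mb_s

if __name__ == "__main__":
    for gb in (1, 10, 20, 50):
        print(f"{gb:3d} GB written -> {effective_write_rate(gb)} MB/s")
```

This is also why short benchmarks report the SLC figure: they finish before the cache fills.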

I'm not aware of USB sticks doing this; the controller is typically not sophisticated enough. A USB stick may not have proper
wear leveling, and may suffer premature failure after a year or two.
Whereas an MLC stick will "last forever", but of course they are hardly
ever made any more.

    A few USB devices, use an SSD plus a USB to SATA adapter chip. But
    the form factor of the device is not usually that of a pen drive.
    The device should have better characteristics, but take up more
    desk space and the form factor is a general nuisance.

    *******

USB sticks have one or two flash chips. In previous generations,
the controller could be two-channel, and the device ran faster
because it was accessing two flash chips. Flash I/O is now fast enough
that the two devices can share a single channel. This makes the
controller chip slightly cheaper to build.

    A few USB sticks (the square-ish Patriot Extreme), could have
    four flash chips. But you can't plug two of those in next to
    one another, so it's one of those per stack.

    While you could go to all the trouble, of using two different
    flash SKUs on a single stick, the quality of the flash used
    in general is so poor, who really cares ? If a TLC wears out in
    a year or two, and receives light usage (is not used as an OS
    boot drive or something), then does it really matter if the
    floor sweepings had different things written on them ? Presumably
    the configuration utility they use at the factory, could handle
    this behavior, but it would likely be a nuisance when setting
    up the devices.

    Even reputable brands, have poor flash in them.

    My OCZ Rally2, laughs at my other sticks, and their shenanigans.
    Sure, the stick had small capacity, it went slow, but... it
    has not degraded, it does not slow with age. It actually
    fucking well works. We will never see another stick like it.

Summary: If the review you saw was for a 2TB-capacity device,
the speed variation could be evidence of capacity fraud,
but only a simple write-read-verify can confirm whether your data
is actually being stored on the device.

Check through the reviews and see whether any are for
a 128GB stick, so that at least the review item is not
"a tea towel or a candle holder".

Verified Purchase (translated from German)
Advertised as up to 350 MB/s read and 200 MB/s write.
Measured on a Thunderbolt 3 port with a Rampow USB C adapter, sequential test:
Check Flash: read 230 MB/s - write 81 MB/s <=== so it's writing at 80... wot a surprise
HD Tune Pro: read 300 MB/s - write 83 MB/s
Win Explorer: read 308 MB/s - write 77 MB/s
Only 3 stars, since the write speed is after all greatly reduced.

As for the branded sticks we recognize, their availability
is pretty weird. They do not seem to be manufactured right now
at normal rates, even though there is an excess of flash chips
around. I have no idea what fab makes the controllers. Probably
not TSMC. There are other fabs that make 22nm or 12nm chips.
There are even fabs in India.

    https://www.amazon.ca/SanDisk-SDCZ880-128G-G46-Extreme-128GB-Solid/dp/B01MU8TZRV

    "Mostly fantastic! A little misleading in a way, though. (Failed a year and half later btw)"

    And that's the thing. You can actually get decent read/write rates, but life ? ...

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joerg Lorenz@21:1/5 to All on Fri Jan 27 06:40:01 2023
    XPost: alt.os.linux, alt.windows7.general

    Am 27.01.23 um 01:02 schrieb Java Jive:
    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

This thread is completely OT in this group and has absolutely nothing to
do with Linux or Windows, not even with Mac, which you forgot. And these
lengthy considerations do not reach the people you want to reach.

    --
    Gutta cavat lapidem (Ovid)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie+@21:1/5 to All on Fri Jan 27 07:02:53 2023
    XPost: alt.os.linux, alt.windows7.general

    On Fri, 27 Jan 2023 00:02:54 +0000, Java Jive <java@evij.com.invalid>
    wrote as underneath :

Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

... claims that the manufacturer has put fast chips at the beginning so
that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

Like everyone else, I've heard of USB sticks having fractions of the
memory that they are specified to have, with the memory wrapping back to
the beginning for address ranges beyond the physical limit actually installed, but the above claim is a new con on me. Has anyone else encountered this?

    FTR, here's what he says in full (auto-translated from Spanish into
    English) ...

    "Cliente Amazon
    1.0 out of 5 stars Satisfaction and disappointment mixed
    Reviewed in Spain on 10 December 2019
    Verified Purchase

As on previous occasions I had already happened with other Chinese manufacturers that the external pendrives or SSD disks not only did not fulfill what was promised but were also a complete deception, since they mixed internally two types of memory (fast at first and very slow afterwards), I decided to do a complete test for each of the 10
pendrives. I will try to explain the process I performed as best as
possible for less experienced users.

    I proceeded to format each pendrive by unchecking the Quick Format
    option, controlling the initial and final time to determine the actual
    write speed (which is what formatting basically does).

    The joys and disappointments in the process did not wait too long: some
    took about 8 to format (writing speeds of 15~17 MB. /sec., reasonably
    high for usb 2.0) and the rest between 17 and 21 minutes, which means
    that at most they reached typical usbs 1.0 speeds, with variable
    formatting times (nor did I want to do the speed calculations so as not
    to get bitter).

    Of course, with these pendrive it is true that random access...

What really annoyed me was, when I observed the speed of the process of formatting the slow pendrives, that I could see that at first the
advance, in general, seemed normal for the first/second gigabytes and
that then there was, visually, almost a stoppage in the speed of that
advance until its completion. This, in my opinion, clearly indicates
that the manufacturer (again, another manufacturer) has mixed chips with fast access along with very slow chips, placing access to the fastest
at the beginning so that the usual test tools give good results.

That is why I, personally, [no longer] use the typical test tools that usually only verify a minimum part (just the initial) of the total capacity of the external disks/pendrives. I prefer to take a little
longer and perform my writing/formatting tests more comprehensively due
to the bad experiences I have had with different memory manufacturers.

    However, in my tests none of the pendrives gave write errors: a simple
    method to know is to use chkdsk (in the command console) when the
    formatting is finished and check that the text bad sectors does not
    appear.

    Of the 10 pendrives, 7 had an acceptable speed (one took 10 minutes, but
    I accept it with resignation as passable) and 3 were horribly slow.

    As the overall price is reasonable, I don't dare complain about it, but
    I value it with only one star because of the unexpected surprises I
    have encountered.

    I'm sorry about the whole thing, but I think this is the only way to
    show the problem and try to help other less experienced users perform
    their own checks more guaranteably."

Why don't testers use h2testw.exe? So simple, so accurate; it spots
wrap-around every time if the complete drive is tested, and gives the
actual average read and write speeds. C+

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to Pancho on Fri Jan 27 12:53:03 2023
    XPost: alt.os.linux, alt.windows7.general

    On 2023-01-27 12:20, Pancho wrote:
    On 27/01/2023 00:02, Java Jive wrote:
    Please excuse the Linux/Windows crosspost, this is a question about
    USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal
    all over again, an Amazon customer for this 128GB USB 3.1 drive  ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning
    so that it passes review tests well, but fills up the memory beyond
    the beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back
    to the beginning for address ranges beyond the physical limit actually
    installed, but the above claim is a new con on me.  Has anyone else
    encountered this?


    I've often noticed, the initial speed writing a large file is fast, but
    then slows dramatically after some time. I assumed some type of fast
    cache memory was used. Caches have always been a thing, and are
    generally sensible, although it is misleading to claim cache speed is
    the total write speed.

    Yes, but the difference in speed when writing to the cache or directly
    is brutal.

    Still, it would be better to use some software designed to test
    thumbdrives for this. So, for now, it is only a suspicion. Needs further testing/verification.

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Shepelev@21:1/5 to All on Fri Jan 27 15:03:43 2023
    XPost: alt.os.linux, alt.windows7.general

    Carlos E.R.:

    Still, it would be better to use some software designed to
    test thumbdrives for this. So, for now, it is only a
    suspicion. Needs further testing/verification.

The speed of bad-block tests in USB-imaging software may be
a reliable indicator. These tests include read-only and
read-write tests with various patterns.

    --
    () ascii ribbon campaign -- against html e-mail
    /\ www.asciiribbon.org -- against proprietary attachments

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to Java Jive on Fri Jan 27 11:20:57 2023
    XPost: alt.os.linux, alt.windows7.general

    On 27/01/2023 00:02, Java Jive wrote:
    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive  ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back to
    the beginning for address ranges beyond the physical limit actually installed, but the above claim is a new con on me.  Has anyone else encountered this?


    I've often noticed, the initial speed writing a large file is fast, but
    then slows dramatically after some time. I assumed some type of fast
    cache memory was used. Caches have always been a thing, and are
    generally sensible, although it is misleading to claim cache speed is
    the total write speed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Java Jive on Fri Jan 27 12:19:28 2023
    XPost: alt.os.linux, alt.windows7.general

    Java Jive wrote:

    claims that the manufacturer has put fast chips at the beginning so that
    it passes review tests well, but fills up the memory beyond the
    beginning with cheaper very slow chips.

    The closest to that I've heard is some SSDs have a small area of SLC for initial writes and then the rest MLC/TLC/QLC, the SLC being faster and
    more durable

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From VanguardLH@21:1/5 to Java Jive on Fri Jan 27 09:51:07 2023
    XPost: alt.os.linux, alt.windows7.general

    Java Jive <java@evij.com.invalid> wrote:

    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back to
    the beginning for address ranges beyond the physical limit actually installed, but the above claim is a new con on me. Has anyone else encountered this?

    Of the USB flash drives that I've dismantled (after they
    catastrophically fail due to exceeding the maximum write cycles), there
    is only one chip inside.

    https://www.usbmemorydirect.com/blog/wp-content/uploads/2021/08/USB-flash-drive-motherboard-1.jpg

One chip is for the hardware interface between the memory chip and the
USB protocol (aka the mass storage controller). The other chip (just
one) is the flash memory.

    If a brand and model doesn't provide a spec sheet listing the sustained
    read and write speeds, don't get that USB drive. There are lots of
    cheap and promotional USB drives, and they're crap for speed. I usually
    look at write speed, because read speed will be inherently faster.
    However, if you intend to write once to use the USB drive as archival
    storage, and since reads are non-destructive to flash memory, perhaps
    fast write speed isn't much of a concern to you other than the first
    time you add more files to the drive.

If there are multi-chip flash memory USB drives, I haven't seen them;
however, I don't buy crappy cheap no-name USB flash drives. Having to
wave-solder multiple flash chips onto a PCB would seem to be a more
expensive manufacturing process than mounting just one flash chip on
the PCB. If you cannot find complete specs, including read/write
performance as bps or IOPS, for a drive, don't buy
that drive. You're getting an unknown that is sold solely on capacity,
and not on performance. Most consumers only look at capacity, and then
later find performance sucks.

    Until the one making accusations provides proof, it looks more like
    someone dissatisfied with their purchase spewing FUD. Also sounds like
    someone that doesn't know the difference between burst or buffered mode
    and sustained mode.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to Anton Shepelev on Fri Jan 27 13:41:50 2023
    XPost: alt.os.linux, alt.windows7.general

    On 2023-01-27 13:03, Anton Shepelev wrote:
    Carlos E.R.:

    Still, it would be better to use some software designed to
    test thumbdrives for this. So, for now, it is only a
    suspicion. Needs further testing/verification.

The speed of bad-block tests in USB-imaging software may be
a reliable indicator. These tests include read-only and
read-write tests with various patterns.

    That burns the thing out. It is destructive.

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Java Jive@21:1/5 to VanguardLH on Fri Jan 27 20:10:11 2023
    XPost: alt.os.linux, alt.windows7.general

    On 27/01/2023 15:51, VanguardLH wrote:
    Java Jive <java@evij.com.invalid> wrote:

    Please excuse the Linux/Windows crosspost, this is a question about USB
    hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the
    beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back to
    the beginning for address ranges beyond the physical limit actually
    installed, but the above claim is a new con on me. Has anyone else
    encountered this?

    Of the USB flash drives that I've dismantled (after they
    catastrophically fail due to exceeding the maximum write cycles), there
    is only one chip inside.

    https://www.usbmemorydirect.com/blog/wp-content/uploads/2021/08/USB-flash-drive-motherboard-1.jpg

One chip is for the hardware interface between the memory chip and the USB protocol (aka the mass storage controller). The other chip (just
one) is the flash memory.

    If a brand and model doesn't provide a spec sheet listing the sustained
    read and write speeds, don't get that USB drive. There are lots of
    cheap and promotional USB drives, and they're crap for speed. I usually
    look at write speed, because read speed will be inherently faster.
    However, if you intend to write once to use the USB drive as archival storage, and since reads are non-destructive to flash memory, perhaps
    fast write speed isn't much of a concern to you other than the first
    time you add more files to the drive.

If there are multi-chip flash memory USB drives, I haven't seen them; however, I don't buy crappy cheap no-name USB flash drives. Having to
wave-solder multiple flash chips onto a PCB would seem to be a more
expensive manufacturing process than mounting just
one flash chip on the PCB. If you cannot find complete specs, including read/write performance as bps or IOPS, for a drive, don't buy
that drive. You're getting an unknown that is sold solely on capacity,
and not on performance. Most consumers only look at capacity, and then
later find performance sucks.

    Until the one making accusations provides proof, it looks more like
    someone dissatisfied with their purchase spewing FUD. Also sounds like someone that doesn't know the difference between burst or buffered mode
    and sustained mode.

Thanks for a most informative post. Your point about the extra
complexity of multiple chips instead of a single chip possibly
increasing rather than reducing costs is reasonably convincing.

I too have noticed that many USB sticks seem slower at writing large
amounts of data, and I think the difference between cache and memory
write speeds is the most likely explanation.

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Java Jive@21:1/5 to Java Jive on Fri Jan 27 20:01:47 2023
    XPost: alt.os.linux, alt.windows7.general

    On 27/01/2023 00:02, Java Jive wrote:

    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive  ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

... claims that the manufacturer has put fast chips at the beginning so
that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

    Thanks for all the informative replies, all of which I've read.

    It seems to me that most probably what the reviewer was actually
    measuring was the difference between cache and memory write speeds.

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to Java Jive on Fri Jan 27 22:24:55 2023
    XPost: alt.os.linux, alt.windows7.general

    On 2023-01-27 21:01, Java Jive wrote:
    On 27/01/2023 00:02, Java Jive wrote:

    Please excuse the Linux/Windows crosspost, this is a question about
    USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal
    all over again, an Amazon customer for this 128GB USB 3.1 drive  ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

... claims that the manufacturer has put fast chips at the beginning
    so that it passes review tests well, but fills up the memory beyond
    the beginning with cheaper very slow chips.

    Thanks for all the informative replies, all of which I've read.

    It seems to me that most probably what the reviewer was actually
    measuring was the difference between cache and memory write speeds.

    Dunno.

    Cache write is orders of magnitude faster than actual write. It is easy
    to notice.

He mentions "writing speeds of 15~17 MB/sec" initially, and that is not
cache speed, which would be at least in the hundreds, depending on
available RAM.

    But he was not writing files, he was formatting (slow formatting). Is
    the cache caching that?

I read the original text in Spanish, and he says that it goes reasonably
fast for the first gigabyte or two, and then it stalls. He says he did
not use "test tools" because they don't test the whole thumbdrive.


    In any case, the information is only grounds for further, proper
    investigation.
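(The cache-vs-device distinction is easy to demonstrate: a plain write returns at page-cache speed, while an fsync'd write waits for the drive. A sketch, where the target path is a placeholder for a file on the stick under test:)

```python
import os
import time

def timed_write(path, data, sync):
    """Time one write; with sync=True, fsync makes us wait for the device."""
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        if sync:
            f.flush()
            os.fsync(f.fileno())      # block until the drive has the data
    return time.perf_counter() - t0

def cache_vs_device(path, mb=64):
    """Return (buffered, synced) wall-clock times for the same payload."""
    data = os.urandom(mb * 1024 * 1024)
    buffered = timed_write(path, data, sync=False)   # mostly RAM speed
    synced = timed_write(path, data, sync=True)      # true device speed
    return buffered, synced
```

On a slow stick the synced figure is dramatically larger than the buffered one; on an internal SSD the two are much closer.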

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ant@21:1/5 to Java Jive on Fri Jan 27 23:41:27 2023
    XPost: alt.os.linux, alt.windows7.general

It's hard to find good reviews of these USB flash sticks. I still have
old 128 MB ones that STILL work today.

    In alt.os.linux Java Jive <java@evij.com.invalid> wrote:
    Please excuse the Linux/Windows crosspost, this is a question about USB hardware relevant to both OSs!

    In a review raising the spectre of the VW emissions testing scandal all
    over again, an Amazon customer for this 128GB USB 3.1 drive ...

    https://www.amazon.co.uk/dp/B08LG94ZR8/ref=twister_B08R5S2ZWG?th=1

    ... claims that the manufacturer has put fast chips at the beginning so
    that it passes review tests well, but fills up the memory beyond the beginning with cheaper very slow chips.

    Like everyone else, I've heard of USB sticks having fractions of the
    memory that they are specified to have, with the memory wrapping back to
    the beginning for address ranges beyond the physical limit actually installed, but the above claim is a new con on me. Has anyone else encountered this?

    FTR, here's what he says in full (auto-translated from Spanish into
    English) ...

    "Cliente Amazon
    1.0 out of 5 stars Satisfaction and disappointment mixed
    Reviewed in Spain on 10 December 2019
    Verified Purchase

As had already happened to me on previous occasions with other Chinese
manufacturers, whose external pendrives or SSD disks not only failed to
deliver what was promised but were a complete deception (they mixed two
types of memory internally, fast at first and very slow afterwards), I
decided to do a complete test of each of the 10 pendrives. I will try
to explain the process I performed as clearly as possible for less
experienced users.

I proceeded to format each pendrive with the "Quick Format" option
unchecked, noting the start and end times to determine the actual write
speed (which is basically what formatting does).

The joys and disappointments were not long in coming: some took about 8
minutes to format (writing speeds of 15~17 MB/sec, reasonably "high"
for USB 2.0) and the rest between 17 and 21 minutes, which means that
at most they reached typical USB 1.0 speeds, with "variable" formatting
times (nor did I want to do the speed calculations, so as not to get
bitter).

Of course, with these pendrives it is true that "random access"...

What really annoyed me, when I observed the speed of the formatting
process on the "slow" pendrives, was that at first the progress seemed
"normal" for the first gigabyte or two, and that then, visually, the
speed almost came to a standstill until completion. This, in my
opinion, clearly indicates that the manufacturer (again, another
manufacturer) has mixed chips with "fast" access along with very slow
chips, placing the fastest ones "at the beginning" so that the usual
test tools give "good results".

That is why I personally [no longer] use the typical test tools, which
usually only verify a "minimum" part (just the "initial" one) of the
total capacity of external disks/pendrives. I prefer to take a little
longer and perform my write/format tests more comprehensively, given
the bad experiences I have had with different memory manufacturers.

However, in my tests none of the pendrives gave write errors: a simple
way to check is to run chkdsk (in the command console) when the
formatting is finished and confirm that the text "bad sectors" does not
appear.

    Of the 10 pendrives, 7 had an acceptable speed (one took 10 minutes, but
I "accept it with resignation" as passable) and 3 were horribly slow.

As the overall price is reasonable, I don't dare complain about it, but
I rate it with only one star because of the "unexpected surprises" I
have encountered.

I'm sorry for the length of all this, but I think it is the only way to
show the problem and to help other less experienced users perform
their own checks with more confidence."


    --
    "But we have this treasure in jars of clay to show that this all-surpassing power is from God and not from us." --2 Corinthians 4:7. :) (L/C)NY 4721 [h2o black ????/(\_/)]! TGIF after passing out after 10:18 PM 4 >7h after a shreddy Th even wo new TV eps.
    Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
    /\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
    / /\ /\ \ Please nuke ANT if replying by e-mail.
    | |o o| |
    \ _ /
    ( )

  • From Paul@21:1/5 to Ant on Sat Jan 28 00:36:31 2023
    XPost: alt.os.linux, alt.windows7.general

    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still have
    old 128 MB that STILL work today.

    That's because it is SLC.

    Paul

  • From Ant@21:1/5 to Paul on Sat Jan 28 08:16:53 2023
    XPost: alt.os.linux, alt.windows7.general

    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still have
    old 128 MB that STILL work today.

    That's because it is SLC.

    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    --
    "I pray that out of his glorious riches he may strengthen you with power through his Spirit in your inner being." --Ephesians 3:16. Wandering Earth 1 was a meh, but will 2 B better?
    Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
    /\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
    / /\ /\ \ Please nuke ANT if replying by e-mail.
    | |o o| |
    \ _ /
    ( )

  • From Paul@21:1/5 to Ant on Sat Jan 28 03:43:25 2023
    XPost: alt.os.linux, alt.windows7.general

    On 1/28/2023 3:16 AM, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still have
    old 128 MB that STILL work today.

    That's because it is SLC.

    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.


    Micron made a 32GB SLC flash chip.

    But it appears to be a tease and does not
    seem to ship in a USB stick.

    Some person in China took a picture of a
    USB3 stick, with the 32GB SLC flash chip
    fitted to it. But it appeared to be for the
    purpose of selling some other inferior product.
    I doubt they ever had stock of those sticks,
    just a prototype or a mockup of one.

    There is a smaller company that makes SLC flash,
    but their flash chips are the small capacity ones. And the pricing
    is what you would expect for such a thing. No matter
    what the capacity, they're $100+. You sometimes see
    them listed as "industrial USB stick" on the electronics
    company sites.

    Paul

  • From Ravi Kapoor@21:1/5 to Ant on Sat Jan 28 20:30:30 2023
    XPost: alt.windows7.general, alt.os.linux

    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still have
    old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
Has the flash industry managed to sort out the size of 1GB yet? I
bought a 256GB flash drive some time ago but it had only 231GB. People
told me that it is because nobody uses the correct measurement yet. Is
1GB = 1000MB or is it 1024MB? I have not been able to reconcile how
256GB is only 231GB when reformatted as NTFS or exFAT (default). I lost
25GB unnecessarily.

  • From Carlos E.R.@21:1/5 to Ravi Kapoor on Sat Jan 28 22:51:28 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
It's hard to find good reviews on these USB flash sticks. I still have
old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB = 1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    --
    Cheers, Carlos.

  • From Java Jive@21:1/5 to Ravi Kapoor on Sat Jan 28 23:06:07 2023
    XPost: alt.windows7.general, alt.os.linux

    On 28/01/2023 22:30, Ravi Kapoor wrote:

    On 28/01/2023 21:51, Carlos E.R. wrote:

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

    How does 250 compare with 231? You can reconcile the figures for us to understand better.

    'The industry', whatever exactly that means, may say one thing, but what
    really matters is what people do, and the fact remains that I rarely see
    GiB in practice, and OSs do one thing while disk manufacturers do another.

    In OSs, and BTW this is also what I was taught in academia, ...
    1KB = 1024 bytes
    1MB = 1024 KB
    1GB = 1024 MB
    ... but to storage manufacturers ...
    1KB = 1000 bytes
    1MB = 1000 of their KB
    1GB = 1000 of their MB

    Hence to convert disk manufacturers' GB figures to OS' GB figures, you
    have to multiply by (1000/1024)^3, giving ...
    256 (manufacturer) GB x 1000^3 / 1024^3 = 238 (OS) GB

    Then there will be some file system overhead to deduct after that.
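That conversion is easy to script; a quick illustrative check of the
figure above:

```python
# Convert a storage manufacturer's decimal "GB" to the binary "GB"
# (strictly, GiB) that most OSs report.
def manufacturer_gb_to_os_gb(gb: float) -> float:
    return gb * 1000**3 / 1024**3

print(round(manufacturer_gb_to_os_gb(256), 1))  # 238.4
```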

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

  • From Ken Blake@21:1/5 to invalid@invalid.invalid on Sat Jan 28 16:00:03 2023
    XPost: alt.windows7.general, alt.os.linux

    On Sat, 28 Jan 2023 20:30:30 +0000, Ravi Kapoor
    <invalid@invalid.invalid> wrote:

    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
It's hard to find good reviews on these USB flash sticks. I still have
old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
told me that it is because nobody uses correct measurement yet. Is 1GB =
1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
unnecessarily.


It has nothing to do with the flash industry. All hard drive
manufacturers define 1TB as 1,000,000,000,000 bytes, while the rest of
the computer world, including Windows, defines it as 2 to the 40th
power (1,099,511,627,776) bytes. So a 5 trillion byte drive is
actually around 4.5TB.
Some people point out that the official international standard defines
the "T" of TB as one trillion, not 1,099,511,627,776. Correct though
they are, using the binary value of TB is so well established in the
computer world that I consider using the decimal value of a trillion
to be deceptive marketing on the part of the hard drive manufacturers.
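The same arithmetic, sketched for the "5 trillion byte" example:

```python
bytes_on_box = 5_000_000_000_000   # sold as "5 TB" (decimal trillion)
tb_binary = bytes_on_box / 2**40   # TB as Windows computes it

print(f"{tb_binary:.2f}")  # 4.55
```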

  • From Ravi Kapoor@21:1/5 to Carlos E.R. on Sat Jan 28 22:30:32 2023
    XPost: alt.windows7.general, alt.os.linux

    On 28/01/2023 21:51, Carlos E.R. wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still
    have
    old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand

    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB =
    1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

    How does 250 compare with 231? You can reconcile the figures for us to understand better.

  • From Carlos E.R.@21:1/5 to Ravi Kapoor on Sun Jan 29 00:38:29 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-28 23:30, Ravi Kapoor wrote:
    On 28/01/2023 21:51, Carlos E.R. wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still
    have
    old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
told me that it is because nobody uses correct measurement yet. Is 1GB =
1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

256 × 10^9 ÷ 1024^3 = 238.4186

That is, 256 GB = 238 GiB. The rest of the difference is filesystem
overhead.


    How does 250 compare with 231? You can reconcile the figures for us to understand better.


    --
    Cheers, Carlos.

  • From Paul@21:1/5 to Carlos E.R. on Sat Jan 28 19:17:43 2023
    XPost: alt.windows7.general, alt.os.linux

    On 1/28/2023 4:51 PM, Carlos E.R. wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
It's hard to find good reviews on these USB flash sticks. I still have
old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB =
    1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>


    The legends on packaging and on computer screens can be wrong,
    so the blame is not always with the user. They switch between GB and GiB
    in a willy-nilly fashion.

    If I pipe dd to wc -c , I can count the bytes that way :-)

    When I use slightly more efficient methods, this is what I get

    "128GB" stick = 128,983,236,608 bytes [they are using the decimal method like HDD do]
    "1GB" stick = 1,006,108,672 bytes [they are using the decimal method like HDD do]

    Then the computer screen does the math for GiB, but prints "GB" next
    to the resulting number. Unnecessarily scaring the user.

The above numbers do not involve any file systems, so these sample
numbers are not a side effect of "formatting". They are collected from
the physical layer.
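To make the two readings concrete, here is the same byte count (the
"128GB" stick above) run through both divisors; only the divisor and the
label differ:

```python
nbytes = 128_983_236_608           # reported size of the "128GB" stick

gb  = nbytes / 1000**3             # decimal GB, as on the packaging
gib = nbytes / 1024**3             # binary GiB, which many OSs compute
                                   # but then mislabel as "GB"
print(f"{gb:.1f} GB / {gib:.1f} GiB")  # 129.0 GB / 120.1 GiB
```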

    Paul

  • From Char Jackson@21:1/5 to All on Sat Jan 28 18:27:20 2023
    XPost: alt.windows7.general, alt.os.linux

    On Sat, 28 Jan 2023 23:06:07 +0000, Java Jive <java@evij.com.invalid>
    wrote:

    On 28/01/2023 22:30, Ravi Kapoor wrote:

    On 28/01/2023 21:51, Carlos E.R. wrote:

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your
    figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

    How does 250 compare with 231? You can reconcile the figures for us to
    understand better.

'The industry', whatever exactly that means, may say one thing, but what
really matters is what people do, and the fact remains that I rarely see
    GiB in practice, and OSs do one thing while disk manufacturers do another.

    In OSs, and BTW this is also what I was taught in academia, ...
    1KB = 1024 bytes
    1MB = 1024 KB
    1GB = 1024 MB
    ... but to storage manufacturers ...
    1KB = 1000 bytes
    1MB = 1000 of their KB
    1GB = 1000 of their MB

    Hence to convert disk manufacturers' GB figures to OS' GB figures, you
    have to multiply by (1000/1024)^3, giving ...
    256 (manufacturer) GB x 1000^3 / 1024^3 = 238 (OS) GB

    Then there will be some file system overhead to deduct after that.

    Thanks for providing the actual formula. All these years I just use 93%
    and call it good. The formula would put it at 93.1322574615478515625%.

  • From Dan Purgert@21:1/5 to Java Jive on Sun Jan 29 12:28:40 2023
    XPost: alt.windows7.general, alt.os.linux


    ["Followup-To:" header set to alt.os.linux.]
    On 2023-01-28, Java Jive wrote:
    On 28/01/2023 22:30, Ravi Kapoor wrote:

    On 28/01/2023 21:51, Carlos E.R. wrote:

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your
    figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

    How does 250 compare with 231? You can reconcile the figures for us to
    understand better.

    'The industry', whatever exactly that means, may say one thing, but what really matters is what people do, and the fact remains that I rarely see
    GiB in practice, and OSs do one thing while disk manufacturers do another.

    In OSs, and BTW this is also what I was taught in academia, ...
    1KB = 1024 bytes
    1MB = 1024 KB
    1GB = 1024 MB
    ... but to storage manufacturers ...
    1KB = 1000 bytes
    1MB = 1000 of their KB
    1GB = 1000 of their MB

    Sure, up to about 20 years ago, or thereabouts, culminating in a 2008
    IEC standard (IEC 80000-13:2008) to use the base-2 powers (kibi, mebi,
    etc.). This was in part due to general confusion that the SI prefixes
    really do mean base-10; and the growing disparity between base-2 and
    base-10 --- as I recall 2.5% at kilo/kibi; and increasing every marked
    order of magnitude (so 5% at mebi/mega ; 7.5% at gibi/giga ; 10% at
    tebi/tera, and so on).

    "Marketing" will quite likely continue to use base-10, because it makes
    the numbers bigger. I was hopeful that with the move to base-2
    numbering for SSDs, we were gonna see a move in accuracy on the box, but
    that didn't happen :(
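For the record, the exact shortfalls come out slightly below those
recalled figures; a quick computation (illustrative):

```python
# How much smaller each decimal prefix is than its binary counterpart.
prefixes = ["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"]
for i, name in enumerate(prefixes, start=1):
    shortfall = 1 - 1000**i / 1024**i
    print(f"{name}: {shortfall:.2%}")
# kilo/kibi: 2.34%, mega/mebi: 4.63%, giga/gibi: 6.87%, tera/tebi: 9.05%
```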



    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

  • From Carlos E.R.@21:1/5 to Paul on Sun Jan 29 10:42:51 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-29 01:17, Paul wrote:
    On 1/28/2023 4:51 PM, Carlos E.R. wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I still
    have
    old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
told me that it is because nobody uses correct measurement yet. Is 1GB =
1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>


    The legends on packaging and on computer screens can be wrong,
    so the blame is not always with the user. They switch between GB and GiB
    in a willy-nilly fashion.

    Yes, that's true, unfortunately. But once you are aware that there are
    two different unit multipliers, and that some use confusing legends, you
    can figure out which they are using.


    If I pipe dd to wc -c , I can count the bytes that way :-)

    When I use slightly more efficient methods, this is what I get

    "128GB" stick = 128,983,236,608 bytes    [they are using the decimal method like HDD do]
    "1GB" stick   =   1,006,108,672 bytes    [they are using the decimal method like HDD do]

    Then the computer screen does the math for GiB, but prints "GB" next
    to the resulting number. Unnecessarily scaring the user.

    The October 2021 version of the dd manual says:

    N and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024,
    xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.
    Binary prefixes can be used, too: KiB=K, MiB=M, and so on.

    which is not what they were doing previously. The February 2018 edition
    says instead:

    N and BYTES may be followed by the following
    multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB
    =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024,
    and so on for T, P, E, Z, Y.


    The new version does:

    cer@Telcontar:~> dd if=/dev/zero of=test.dd count=1000 bs=1000
    1000+0 records in
    1000+0 records out
    1000000 bytes (1,0 MB, 977 KiB) copied, 0,00394632 s, 253 MB/s
    cer@Telcontar:~> dd if=/dev/zero of=test.dd count=1024 bs=1024
    1024+0 records in
    1024+0 records out
    1048576 bytes (1,0 MB, 1,0 MiB) copied, 0,00561859 s, 187 MB/s
    cer@Telcontar:~>

    cer@Telcontar:~> dd if=/dev/zero of=test.dd count=1000 bs=1000000
    1000+0 records in
    1000+0 records out
1000000000 bytes (1,0 GB, 954 MiB) copied, 0,499733 s, 2,0 GB/s
cer@Telcontar:~>




    The old version does:

    cer@Isengard:~> dd if=/dev/zero of=test.dd count=1000 bs=1000
    1000+0 records in
    1000+0 records out
    1000000 bytes (1.0 MB, 977 KiB) copied, 0.0131062 s, 76.3 MB/s
    cer@Isengard:~> dd if=/dev/zero of=test.dd count=1024 bs=1024
    1024+0 records in
    1024+0 records out
    1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133418 s, 78.6 MB/s
    cer@Isengard:~>

    cer@Isengard:~> dd if=/dev/zero of=test.dd count=1000 bs=1000000
    1000+0 records in
    1000+0 records out
    1000000000 bytes (1.0 GB, 954 MiB) copied, 2.66717 s, 375 MB/s
    cer@Isengard:~>

    At this moment I don't see how to choose units.



    The above numbers, do not involve any file systems, so these sample
    numbers are not a side effect of "formatting". These numbers
    are collected from the physical layer.

    Yes.

    --
    Cheers, Carlos.

  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Sun Jan 29 07:27:38 2023
    XPost: alt.windows7.general, alt.os.linux

    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
It's hard to find good reviews on these USB flash sticks. I still have
old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB =
    1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.


    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

  • From Ken Blake@21:1/5 to All on Sun Jan 29 07:24:20 2023
    XPost: alt.windows7.general, alt.os.linux

    On Sat, 28 Jan 2023 23:06:07 +0000, Java Jive <java@evij.com.invalid>
    wrote:

    On 28/01/2023 22:30, Ravi Kapoor wrote:

    On 28/01/2023 21:51, Carlos E.R. wrote:

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    So you are very clever here. How do you work out 256GB = 231GB? Use your
    figures to demonstrate this, Mr Clever.

    (256 x 1000) / 1024 = 250

    How does 250 compare with 231? You can reconcile the figures for us to
    understand better.

'The industry', whatever exactly that means, may say one thing, but what
really matters is what people do, and the fact remains that I rarely see
    GiB in practice, and OSs do one thing while disk manufacturers do another.


    Yes to everything in that paragraph.

  • From Carlos E.R.@21:1/5 to Ken Blake on Sun Jan 29 18:43:57 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
It's hard to find good reviews on these USB flash sticks. I still have
old 128 MB that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand
    was an interesting read.
    Have the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive sometime ago but it had only 231GB. People
told me that it is because nobody uses correct measurement yet. Is 1GB =
1000MB or is it 1024MB. I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry has very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.


    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

The computer industry has done it wrong for decades (looking at
Microsoft and others).

    --
    Cheers, Carlos.

  • From Daniel65@21:1/5 to Carlos E.R. on Mon Jan 30 21:17:02 2023
    XPost: alt.windows7.general, alt.os.linux

    Carlos E.R. wrote on 29/1/23 8:42 pm:

    <Snip>

    The October 2021 version of the dd manual says:

           N  and  BYTES  may be followed by the following multiplicative
    suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024,
    xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T,  P, E, Z, Y.
     Binary prefixes can be used, too: KiB=K, MiB=M, and so on.

    which is not what they were doing previously. The February 2018 edition
    says instead:
 
           N  and  BYTES  may  be  followed  by the following multiplicative
    suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000,
    M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024,
    and so on for T, P, E, Z, Y.

    Hmm!! Might be Picky! Picky! but shouldn't ....

    "N and BYTES may be followed by the following multiplicative
    suffixes: ...."

    really be ....

    "N and BYTES may be *preceded* by the following multiplicative
    suffixes: ...."

    ??
    --
    Daniel

  • From Carlos E.R.@21:1/5 to All on Mon Jan 30 11:58:18 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-30 11:17, Daniel65 wrote:
    Carlos E.R. wrote on 29/1/23 8:42 pm:

    <Snip>

    The October 2021 version of the dd manual says:

            N  and  BYTES  may be followed by the following multiplicative
    suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024,
    xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T,  P, E, Z,
    Y.   Binary prefixes can be used, too: KiB=K, MiB=M, and so on.

    which is not what they were doing previously. The February 2018
    edition says instead:

            N  and  BYTES  may  be  followed  by the following
    multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB
    =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G
    =1024*1024*1024, and  so on for T, P, E, Z, Y.

    Hmm!! Might be Picky! Picky! but shouldn't ....

    "N  and  BYTES  may be followed by the following multiplicative
    suffixes: ...."

    really be ....

    "N  and  BYTES  may be *preceded* by the following multiplicative suffixes: ...."

    ??


    Tell them :-)

    --
    Cheers, Carlos.

  • From Daniel65@21:1/5 to Ken Blake on Mon Jan 30 21:27:18 2023
    XPost: alt.windows7.general, alt.os.linux

    Ken Blake wrote on 30/1/23 1:27 am:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    On 28/01/2023 08:16, Ant wrote:
    In alt.os.linux Paul <nospam@needed.invalid> wrote:
    On 1/27/2023 6:41 PM, Ant wrote:
    It's hard to find good reviews on these USB flash sticks. I
    still have old 128 MB sticks that STILL work today.
    That's because it is SLC.
    https://www.kingston.com/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand

    was an interesting read.
    Has the flash industry managed to sort out the size of 1GB yet?
    I bought a 256GB flash drive some time ago but it had only 231GB.
    People told me that it is because nobody uses correct measurement
    yet. Is 1GB = 1000MB or is it 1024MB? I have not been able to
    reconcile how 256GB is only 231GB when reformatted as NTFS or
    exFAT (default). I lost 25GB unnecessarily.

    There is no confusion at all, except for you. You have to study
    and learn the different units.

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000
    MB.

    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    I've heard/read of them both .... but am never confident which is
    Decimal and which is Binary. At times I think it varies dependent on who
    is speaking!!
    --
    Daniel

  • From Dan Purgert@21:1/5 to All on Mon Jan 30 12:01:53 2023
    XPost: alt.windows7.general, alt.os.linux


    ["Followup-To:" header set to alt.os.linux.]
    On 2023-01-30, Daniel65 wrote:
    Carlos E.R. wrote on 29/1/23 8:42 pm:

    <Snip>

    The October 2021 version of the dd manual says:

           N  and  BYTES  may be followed by the following multiplicative
    suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024,
    xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T,  P, E, Z, Y.
     Binary prefixes can be used, too: KiB=K, MiB=M, and so on.

    which is not what they were doing previously. The February 2018 edition
    says instead:

           N  and  BYTES  may  be  followed  by the following
    multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB
    =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024,
    and  so on for T, P, E, Z, Y.

    Hmm!! Might be Picky! Picky! but shouldn't ....

    "N and BYTES may be followed by the following multiplicative
    suffixes: ...."

    really be ....

    "N and BYTES may be *preceded* by the following multiplicative
    suffixes: ...."

    No, the original language is correct. You don't write "MB100".
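The suffix table being quoted is easy to sketch as a toy parser (a hypothetical illustration, not dd's actual implementation):

```python
# Multiplicative suffixes as listed in the dd manual:
# kB=1000, K=1024, MB=1000*1000, M=1024*1024, GB=10**9, G=2**30, etc.
SUFFIXES = {
    "c": 1, "w": 2, "b": 512,
    "kB": 1000, "K": 1024,
    "MB": 1000**2, "M": 1024**2, "xM": 1024**2,
    "GB": 1000**3, "G": 1024**3,
    "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3,
}

def parse_size(spec):
    """Parse a dd-style size like '4K' or '100MB': the number comes
    first and is *followed by* the suffix, as the manual says."""
    # Try longest suffixes first so 'GiB' isn't matched as 'B'.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if spec.endswith(suffix):
            return int(spec[: -len(suffix)]) * SUFFIXES[suffix]
    return int(spec)

print(parse_size("4K"))     # 4096
print(parse_size("100MB"))  # 100000000
```

Note the number precedes the suffix, which is why the manual's "followed by" wording is right.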



    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

  • From Daniel65@21:1/5 to Carlos E.R. on Mon Jan 30 23:28:31 2023
    XPost: alt.windows7.general, alt.os.linux

    Carlos E.R. wrote on 30/1/23 9:58 pm:
    On 2023-01-30 11:17, Daniel65 wrote:
    Carlos E.R. wrote on 29/1/23 8:42 pm:

    <Snip>

    The October 2021 version of the dd manual says:

            N  and  BYTES  may be followed by the following
    multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024,
    MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024,
    and so on for T,  P, E, Z, Y.   Binary prefixes can be used, too:
    KiB=K, MiB=M, and so on.

    which is not what they were doing previously. The February 2018
    edition says instead:

            N  and  BYTES  may  be  followed  by the following
    multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB
    =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G
    =1024*1024*1024, and  so on for T, P, E, Z, Y.

    Hmm!! Might be Picky! Picky! but shouldn't ....

    "N  and  BYTES  may be followed by the following multiplicative
    suffixes: ...."

    really be ....

    "N  and  BYTES  may be *preceded* by the following multiplicative
    suffixes: ...."

    ??

    Tell them :-)

    Oh!! Dear!! So you mean I may have been doing it wrong all these years!!
    --
    Daniel

  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Mon Jan 30 11:10:01 2023
    XPost: alt.windows7.general, alt.os.linux

    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.


    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it
    changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

  • From Richard Kettlewell@21:1/5 to Carlos E.R. on Mon Jan 30 18:36:32 2023
    XPost: alt.windows7.general, alt.os.linux

    "Carlos E.R." <robin_listas@es.invalid> writes:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    Has the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive some time ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB =
    1000MB or is it 1024MB? I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    The storage industry, yes. But RAM is still routinely sold with
    1GB=2^30bytes, leading to the rather bizarre situation where a laptop
    spec might claim, for example, 16GB RAM and 512GB SSD, with different
    meanings for GB in each case.

    --
    https://www.greenend.org.uk/rjk/

  • From Ken Blake@21:1/5 to invalid@invalid.invalid on Mon Jan 30 12:18:16 2023
    XPost: alt.windows7.general, alt.os.linux

    On Mon, 30 Jan 2023 18:36:32 +0000, Richard Kettlewell <invalid@invalid.invalid> wrote:

    "Carlos E.R." <robin_listas@es.invalid> writes:
    On 2023-01-28 21:30, Ravi Kapoor wrote:
    Has the flash industry managed to sort out the size of 1GB yet? I
    bought a 256GB flash drive some time ago but it had only 231GB. People
    told me that it is because nobody uses correct measurement yet. Is 1GB =
    1000MB or is it 1024MB? I have not been able to reconcile how 256GB is
    only 231GB when reformatted as NTFS or exFAT (default). I lost 25GB
    unnecessarily.

    There is no confusion at all, except for you. You have to study and
    learn the different units.

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    <https://en.wikipedia.org/wiki/Byte#Multiple-byte_units>

    <https://en.wikipedia.org/wiki/Gigabyte#Consumer_confusion>

    The storage industry, yes. But RAM is still routinely sold with
    1GB=2^30 bytes, leading to the rather bizarre situation where a laptop
    spec might claim, for example, 16GB RAM and 512GB SSD, with different
    meanings for GB in each case.


    Yes. I never thought of that before, but it is bizarre.

  • From Carlos E.R.@21:1/5 to Ken Blake on Tue Jan 31 11:09:18 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-30 19:10, Ken Blake wrote:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.


    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    Well, that has been decreed wrong.

    --
    Cheers, Carlos.

  • From Daniel65@21:1/5 to Ken Blake on Tue Jan 31 21:27:12 2023
    XPost: alt.windows7.general, alt.os.linux

    Ken Blake wrote on 31/1/23 5:10 am:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:
    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).

    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it changes.

    Back in the day, in the English World didn't "Billion" mean One Million
    times One Million ..... whereas in the U.S. of A. World "Billion" means
    One Thousand times One Million??
    --
    Daniel

  • From Pancho@21:1/5 to Ken Blake on Tue Jan 31 10:58:15 2023
    XPost: alt.windows7.general, alt.os.linux

    On 30/01/2023 18:10, Ken Blake wrote:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.


    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.


    But...

    We use decimal for most other stuff, why would we want to use binary for
    this special case? K means 10^3 not 2^10, M means 10^6, G means 10^9.
    Why introduce complexity, unnecessary special cases?

    What advantage do you think 2^10, 2^20 offers?

  • From Carlos E. R.@21:1/5 to Pancho on Tue Jan 31 15:09:55 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-31 11:58, Pancho wrote:
    On 30/01/2023 18:10, Ken Blake wrote:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it
    changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.


    But...

    We use decimal for most other stuff, why would we want to use binary for
    this special case? K means 10^3 not 2^10, M means 10^6, G means 10^9.
    Why introduce complexity, unnecessary special cases?

    What advantage do you think 2^10, 2^20 offers?

    That memory has to be built in powers of 2. It is easier to say "1K
    of RAM" than "1024 bytes", and the difference is small, thus ignored.
    The problem is, we now use gigas and teras, and the difference is notable.

    As computers are recent and the unit prefixes like K, M, G... are older,
    it is computerese which has to adapt and use new and different prefixes,
    not usurp the old prefixes introducing confusion.
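Carlos's "small at kilo, notable at giga/tera" point can be made concrete with a quick sketch of how the binary/decimal gap grows per prefix:

```python
# Relative gap between the binary and decimal reading of each prefix:
for name, power in [("K", 1), ("M", 2), ("G", 3), ("T", 4)]:
    binary, decimal = 1024**power, 1000**power
    print(f"{name}: {100 * (binary / decimal - 1):.1f}%")
# Prints: K: 2.4%  M: 4.9%  G: 7.4%  T: 10.0%
```

At kilobytes the 2.4% gap was easy to shrug off; at terabytes it is a tenth of the drive.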

    --
    Cheers,
    Carlos E.R.

  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Tue Jan 31 08:05:21 2023
    XPost: alt.windows7.general, alt.os.linux

    On Tue, 31 Jan 2023 11:09:18 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-30 19:10, Ken Blake wrote:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it
    changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    Well, that has been decreed wrong.


    Yes, I know. But as I said, that's a decree I disagree with.

  • From Richard Kettlewell@21:1/5 to Pancho on Tue Jan 31 14:40:24 2023
    XPost: alt.windows7.general, alt.os.linux

    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB mean
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, Gib, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.

    Persistent storage is less likely to be a power of 2 (even if the
    underlying medium is 2^n bytes, there’s generally space reserved for metadata, wear-levelling, error management, firmware, etc) but it’s
    still generally divisible by at least 2^12 and often a larger power of
    2. It’s usually not divisible by a nontrivial power of 10, so the
    capacities in decimal units are usually not very precise.
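A quick numeric sketch of the two claims above, the 16GB figure and the divisibility point:

```python
# 16 GiB of RAM expressed in decimal units:
ram_bytes = 16 * 2**30
print(ram_bytes)            # 17179869184
print(ram_bytes / 10**9)    # 17.179869184 (the awkward decimal figure)

# Divisible by a large power of two, but not a round decimal number:
assert ram_bytes % 2**12 == 0   # divisible by 4096
assert ram_bytes % 10**3 != 0   # not even divisible by 1000
```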

    --
    https://www.greenend.org.uk/rjk/

  • From Carlos E. R.@21:1/5 to Ken Blake on Tue Jan 31 16:54:57 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-31 16:32, Ken Blake wrote:
    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:


    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    Mine is that those computer people are doing it wrong, and the rest of
    the world is right.

    Computer people have to adapt and say 1 GiB instead of 1 GB. Hard disk
    people have been doing it right for decades.


    Microsoft, typically, hates standards and goes against them.


    --
    Cheers,
    Carlos E.R.

  • From Ken Blake@21:1/5 to All on Tue Jan 31 08:32:27 2023
    XPost: alt.windows7.general, alt.os.linux

    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 30/01/2023 18:10, Ken Blake wrote:
    On Sun, 29 Jan 2023 18:43:57 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-29 15:27, Ken Blake wrote:
    On Sat, 28 Jan 2023 22:51:28 +0100, "Carlos E.R." <robin_listas@es.invalid> wrote:

    The industry is very clear that 1 GiB = 1024 MiB, and 1 GB = 1000 MB.

    I don't agree. It may be clear to you and to me, and to others here,
    but most people have never heard of GiB or MiB.

    Then learn.

    The computer industry has done it wrong for decades (looking at
    Microsoft and others).


    Many English words, phrases, and abbreviations have been used
    incorrectly for much longer than decades. But after a while their new
    usage gets established, and nearly everyone uses it. What was once
    wrong doesn't remain wrong forever. That's the nature of language; it
    changes.

    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc., and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.


    But...

    We use decimal for most other stuff,

    Most? Yes, but not for computers.


    why would we want to use binary for
    this special case?

    Want to? It's not a matter of wanting to. We should accept it because
    that's the way it's done with computers, except by drive
    manufacturers.

    K means 10^3 not 2^10, M means 10^6, G means 10^9.

    Except for computers, yes.

    Why introduce complexity, unnecessary special cases?

    I don't want to introduce complexity, or have unnecessary special
    cases. My point was that it's *already* been introduced and
    established *in the computer world*. Going against what is established
    is exactly what introduces unnecessary special cases, and confuses
    people.

    The number of people who buy a 1TB hard drive and are confused
    because Windows tells them it's only around 900GB is enormous; look in
    the Windows newsgroups and online forums and see how many people ask
    "what happened to the other 100GB?"

    Technically, you are right, of course. Using the established standard,
    it's 1,000,000,000,000 bytes--1TB--and that's what drive manufacturers
    call it. But according to Windows and most of the rest of the computer
    world, 1TB is 1,099,511,627,776 bytes, so their "1TB" is only around
    900 GB.

    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    What advantage do you think 2^10, 2^20 offers?


    That question is irrelevant to my point. I'm talking only about the
    computer world.

  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Tue Jan 31 11:21:12 2023
    XPost: alt.windows7.general, alt.os.linux

    On Tue, 31 Jan 2023 16:54:57 +0100, "Carlos E. R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 16:32, Ken Blake wrote:
    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:


    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    Mine is that those computer people are doing it wrong, and the rest of
    the world is right.

    Going by the standards, you are of course correct.

    But it doesn't matter. What matters is what is considered correct by
    most people.

    Computer people have to adapt and say 1 GiB instead of 1 GB.

    "Have to"? Not a chance. It will never happen. There's only one way to
    get consistency and that's for the drive manufacturers to use the
    common understanding of KB, MB, GB etc. that the rest of the computer
    world uses.

    You will of course say that the drive manufacturers shouldn't change
    and the rest of the computer world should. That might be a good choice
    if it were possible, but it's not. There are only a handful of drive
    manufacturers, but *millions* of computer users. You're not going to
    change those millions.

    Hard disk
    people have been doing it right for decades.


    Technically, yes. Practically, no.

    Microsoft, typically, hates standards and goes against.

    We agree on that. Is Microsoft responsible for the common meanings of
    MB, GB, TB, etc. being different from the standards and being used the
    way they are? Probably.

    But it doesn't matter who is responsible. Whether you or I like it or
    not (I also don't like it, but I have no real choice other than to
    accept it), that's the way it is, and we are not going to change it.

  • From Paul@21:1/5 to Pancho on Tue Jan 31 14:36:33 2023
    XPost: alt.windows7.general, alt.os.linux

    On 1/31/2023 5:58 AM, Pancho wrote:


    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means 10^9.
    Why introduce complexity, unnecessary special cases?

    What advantage do you think 2^10, 2^20 offers?

    Decoding logic is simpler when you use powers-of-two.

    This was important... a long time ago.

    In this example, someone uses a '139 to decode an address and select a device with it.

    https://blog.idorobots.org/media/upnod3/ram.png

    Paul
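A rough software analogue of what Paul's '139-style decoder does may help here (illustrative only; the real advantage is that this happens in a couple of logic gates, not code):

```python
def chip_select(addr, width=16):
    """Decode the top two bits of a 16-bit address into one of four
    device selects, as a 74LS139-style 2-to-4 decoder would."""
    return (addr >> (width - 2)) & 0b11

# With power-of-two device sizes (16 KiB here), each select covers a
# clean address window -- no division or range comparison needed.
print(chip_select(0x0000))  # 0 -> first device
print(chip_select(0x4000))  # 1
print(chip_select(0x8000))  # 2
print(chip_select(0xFFFF))  # 3 -> last device
```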

  • From Carlos E.R.@21:1/5 to Ken Blake on Tue Jan 31 21:44:02 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-01-31 19:21, Ken Blake wrote:
    On Tue, 31 Jan 2023 16:54:57 +0100, "Carlos E. R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 16:32, Ken Blake wrote:
    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:


    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    Mine is that those computer people are doing it wrong, and the rest of
    the world is right.

    Going by the standards, you are of course correct.

    But it doesn't matter. What matters is what is considered correct by most people.

    Computer people have to adapt and say 1 GiB instead of 1 GB.

    "Have to"? Not a chance. It will never happen. There's only one way to
    get consistency and that's for the drive manufacturers to use the
    common understanding of KB, MB, GB etc. that the rest of the computer
    world uses.

    Not going to happen :-)


    You will of course say that the drive manufacturers shouldn't change
    and the rest of the computer world should. That might be a good choice
    if it were possible, but it's not. There are only a handful of drive
    manufacturers, but *millions* of computer users. You're not going to
    change those millions.

    Give it time, and teach units in schools.


    Hard disk
    people are doing it right since decades.


    Technically, yes. Practically, no.

    Microsoft, typically, hates standards and goes against.

    We agree on that. Is Microsoft responsible for the common meanings of
    MB, GB, TB, etc. being different from the standards and being used the
    way they are? Probably.

    But it doesn't matter who is responsible. Whether you or I like it or
    not (I also don't like it, but I have no real choice other than to
    accept it), that's the way it is, and we are not going to change it.

    --
    Cheers, Carlos.

  • From Pancho@21:1/5 to Paul on Tue Jan 31 22:12:32 2023
    XPost: alt.windows7.general, alt.os.linux

    On 1/31/23 19:36, Paul wrote:
    On 1/31/2023 5:58 AM, Pancho wrote:


    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means 10^9.
    Why introduce complexity, unnecessary special cases?

    What advantage do you think 2^10, 2^20 offers?

    Decoding logic is simpler when you use powers-of-two.

    This was important... a long time ago.

    In this example, someone uses a '139 to decode an address and select a
    device with it.

    https://blog.idorobots.org/media/upnod3/ram.png

       Paul

    Using hex for memory addressing is sensible, but that is different.
    Forty years ago I did that stuff, but not recently. I use high-level
    languages, and the kids I worked with understood far less than I did
    about that type of stuff, and didn't care.

    Just as I don't know how to double declutch a car.

  • From Pancho@21:1/5 to Richard Kettlewell on Tue Jan 31 22:12:13 2023
    XPost: alt.windows7.general, alt.os.linux

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB mean
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, Gib, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talking about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either, areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.

    Floating point numbers cannot represent rational numbers like 1/3 precisely/concisely. We live with them for simplicity.

    My point was that when I do a calculation for how much RAM I need, I use decimal, ~17.2 * 10^9 B is the number I want to know, not the prettier
    16 GiB.

    If you support both systems, you encourage confusion.

    Other than that, I agree with what Carlos is saying.
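FWIW, the two figures being argued over here come from the same byte count viewed through the two conventions; a quick sketch in plain Python (nothing below is from anyone's post):

```python
# One byte count, two conventions: decimal GB (10**9) vs binary GiB (2**30).
GB = 10**9
GiB = 2**30

n_bytes = 16 * GiB                  # "16 GiB of RAM"
print(n_bytes)                      # total bytes
print(n_bytes / GB)                 # the "ugly" decimal figure, ~17.18
print(n_bytes / GiB)                # the "pretty" binary figure, 16.0
```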

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Pancho on Wed Feb 1 08:35:31 2023
    XPost: alt.windows7.general, alt.os.linux

    Pancho <Pancho.Jones@proton.me> writes:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?
    Well, it’s hardly ‘introduce’ any more, the convention is decades
    old.

    What advantage do you think 2^10, 2^20 offers?
    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.

    RAM is never 16 GiB either, areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for
    my use.

    Why did you bother to ask the question when you’re not going to accept
    the answer?

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to Richard Kettlewell on Wed Feb 1 10:43:45 2023
    XPost: alt.windows7.general, alt.os.linux

    On 01/02/2023 08:35, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?
    Well, it’s hardly ‘introduce’ any more, the convention is decades
    old.

    What advantage do you think 2^10, 2^20 offers?
    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.

    RAM is never 16 GiB either, areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for
    my use.

    Why did you bother to ask the question when you’re not going to accept
    the answer?


    Seriously? "The answer".

    When I see something I consider bad practice, I always ask why? What
    benefit does it offer?

    Quite often I have missed something important, other times I haven't.

    Don't you do that? How else do we understand stuff?

    Anyway, I wasn't even rejecting your answer. I just didn't feel the
    concise notation merited the complexity of a special case.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Pancho on Wed Feb 1 13:36:19 2023
    XPost: alt.windows7.general, alt.os.linux

    Pancho <Pancho.Jones@proton.me> writes:
    On 01/02/2023 08:35, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    What advantage do you think 2^10, 2^20 offers?
    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.

    RAM is never 16 GiB either, areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for
    my use.
    Why did you bother to ask the question when you’re not going to accept
    the answer?

    Seriously? "The answer".

    When I see something I consider bad practice, I always ask why? What
    benefit does it offer?

    Quite often I have missed something important, other times I haven't.

    Don't you do that? How else do we understand stuff?

    Anyway, I wasn't even rejecting your answer. I just didn't feel the
    concise notation merited the complexity of a special case.

    You asked why, I told you why, you argued with it, and are still doing
    so.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Blake@21:1/5 to All on Wed Feb 1 08:25:22 2023
    XPost: alt.windows7.general, alt.os.linux

    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.

    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.

    For example, if I go to System>About here under Windows 11, it says
    Installed RAM 32.0 GB (31.8 usable). How much is installed and how
    much is usable are two different things.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Wed Feb 1 08:28:02 2023
    XPost: alt.windows7.general, alt.os.linux

    On Tue, 31 Jan 2023 21:44:02 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 19:21, Ken Blake wrote:
    On Tue, 31 Jan 2023 16:54:57 +0100, "Carlos E. R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 16:32, Ken Blake wrote:
    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:


    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    Mine is that those computer people are doing it wrong, and the rest of
    the world is right.

    Going by the standards, you are of course correct.

    But it doesn't matter. What matters is what is considered correct by most
    people.

    Computer people have to adapt and say 1 GiB instead of 1 GB.

    "Have to"? Not a chance. It will never happen. There's only one way to
    get consistency and that's for the drive manufacturers to use the
    common understanding of KB, MB, GB etc. that the rest of the computer
    world uses.

    Not going to happen :-)

    We agree on that.


    You will of course say that the drive manufacturers shouldn't change
    and the rest of the computer world should. That might be a good choice
    if it were possible, but it's not. There are only a handful of drive
    manufacturers, but *millions* of computer users. You're not going to
    change those millions.

    Give it time, and teach units in schools.

    Not going to happen :-)


    Hard disk
    people have been doing it right for decades.


    Technically, yes. Practically, no.

    Microsoft, typically, hates standards and goes against them.

    We agree on that. Is Microsoft responsible for the common meanings of
    MB, GB, TB, etc. being different from the standards and being used the
    way they are? Probably.

    But it doesn't matter who is responsible. Whether you or I like it or
    not (I also don't like it, but I have no real choice other than to
    accept it), that's the way it is, and we are not going to change it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to Ken Blake on Wed Feb 1 22:36:18 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-02-01 16:28, Ken Blake wrote:
    On Tue, 31 Jan 2023 21:44:02 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 19:21, Ken Blake wrote:
    On Tue, 31 Jan 2023 16:54:57 +0100, "Carlos E. R."
    <robin_listas@es.invalid> wrote:

    On 2023-01-31 16:32, Ken Blake wrote:
    On Tue, 31 Jan 2023 10:58:15 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:


    My point, once again, is that when drive manufacturers use the
    established standard for disk drives when almost the rest of the
    computer world does it differently, it confuses people and is a bad
    thing to do. In my view, this is a case where consistency is more
    important than standards.

    Mine is that those computer people are doing it wrong, and the rest of >>>> the world is right.

    Going by the standards, you are of course correct.

    But it doesn't matter. What matters is what is considered correct by most
    people.

    Computer people have to adapt and say 1 GiB instead of 1 GB.

    "Have to"? Not a chance. It will never happen. There's only one way to
    get consistency and that's for the drive manufacturers to use the
    common understanding of KB, MB, GB etc. that the rest of the computer
    world uses.

    Not going to happen :-)

    We agree on that.


    You will of course say that the drive manufacturers shouldn't change
    and the rest of the computer world should. That might be a good choice
    if it were possible, but it's not. There are only a handful of drive
    manufacturers, but *millions* of computer users. You're not going to
    change those millions.

    Give it time, and teach units in schools.

    Not going to happen :-)

    :-)

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Blake@21:1/5 to All on Wed Feb 1 15:41:33 2023
    XPost: alt.windows7.general, alt.os.linux

    On Wed, 1 Feb 2023 22:21:22 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.
    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.


    I care about what is usable to me, just like with disk storage.

    As do I. But what either of us cares about has nothing to do with how
    much RAM 4GB is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to Ken Blake on Wed Feb 1 22:21:22 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.


    I care about what is usable to me, just like with disk storage.

    For example If I go to System>About, here under Windows 11, it says
    Installed RAM 32.0 GB (31.8 usable). How much is installed and how
    much is usable are two different things.


    For me, it says 16 GB and 15.8 GB. I don't know if that 15.8 GB is
    exactly 15.8 GiB or only accurate to 1 dp. Either way, the prettiness of exactly 16 has gone.

    The prettiness is gone, but I'm still left with the problem that if I
    want to calculate the amount of usable RAM my software data structures
    require, in GiB, I have to convert my natural decimal calculations to a
    binary format, to avoid the 7.4% difference between the GiB, and the
    more orthodox decimal GB. Maybe other programmers don't estimate memory
    requirements, or don't use algorithms that require a lot of memory? It
    wouldn't surprise me; innumeracy is surprisingly high in IT.

    Clinging to unnecessary complexity reminds me of the metric martyrs and
    their insistence on using imperial weights and measures. When automating
    some business process, you often see veterans of the industry try to
    cling to unnecessary complexity. I guess if you remove the complexity,
    the competitive advantage they have in understanding it disappears, they
    are diminished.

    As far as I can see, the US government and the standards organizations
    have agreed on the decimal GB.
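The 7.4% I mentioned is specific to the giga prefix; the gap grows with each step up. A quick check (the prefix values are the standard SI/IEC ones, the script itself is just an illustration):

```python
# How much larger each binary prefix is than its decimal namesake.
pairs = [("Ki", 2**10, 10**3), ("Mi", 2**20, 10**6),
         ("Gi", 2**30, 10**9), ("Ti", 2**40, 10**12)]
for name, binary, decimal in pairs:
    gap = (binary / decimal - 1) * 100
    print(f"{name}B exceeds its decimal counterpart by {gap:.1f}%")
```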

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Daniel65@21:1/5 to Pancho on Thu Feb 2 19:51:34 2023
    XPost: alt.windows7.general, alt.os.linux

    Pancho wrote on 2/2/23 9:21 am:
    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho
    <Pancho.Jones@proton.me> wrote:
    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards,
    KB means 1024, MB means 1024 x 1024, GB means 1024 x 1024 x
    1024, etc. and KiB, MiB, GiB, etc. are almost never used
    and shouldn't be. A disk drive that's called 2GB should
    have 2,147,483,648 bytes, not 2,000,000,000.

    But...

    We use decimal for most other stuff, why would we want to use
    binary for this special case? K means 10^3 not 2^10, M means
    10^6, G means 10^9. Why introduce complexity, unnecessary
    special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is
    decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really
    must) instead of 17.179869184GB RAM.

    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS. For instance, 4 GiB on Windows
    XP 32 only had about 3.1 GiB available for my use.

    Yes, but how much RAM there is and how much is available for your
    use are two different things. The amount of RAM installed on your
    computer is 4GB, not 3.1GB.

    I care about what is usable to me, just like with disk storage.

    How much of that 3.1GB would be useful for you if the other 0.9GB were
    not being used?? ;-P I'm guessing NONE!!
    --
    Daniel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E.R.@21:1/5 to Pancho on Thu Feb 2 10:37:29 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-02-01 23:21, Pancho wrote:
    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS.  For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.


    I care about what is usable to me, just like with disk storage.

    For example If I go to System>About, here under Windows 11, it says
    Installed RAM 32.0 GB (31.8 usable). How much is installed and how
    much is usable are two different things.


    For me, it says 16 GB and 15.8 GB. I don't know if that 15.8 GB is
    exactly 15.8 GiB or only accurate to 1 dp. Either way, the prettiness of exactly 16 has gone.

    The prettiness is gone, but I'm still left with the problem that if I
    want to calculate the amount of usable RAM my software data structures
    require, in GiB, I have to convert my natural decimal calculations to a
    binary format, to avoid the 7.4% difference between the GiB, and the
    more orthodox decimal GB. Maybe other programmers don't estimate memory
    requirements, or don't use algorithms that require a lot of memory? It
    wouldn't surprise me; innumeracy is surprisingly high in IT.

    Clinging to unnecessary complexity reminds me of the metric martyrs and
    their insistence on using imperial weights and measures. When automating
    some business process, you often see veterans of the industry try to
    cling to unnecessary complexity. I guess if you remove the complexity,
    the competitive advantage they have in understanding it disappears, they
    are diminished.

    As far as I can see, the US government and the standards organizations
    have agreed on the decimal GB.

    If all software and docs stick to the units as described by the
    standards organizations, there would be no doubts about what that "16 GB
    and 15.8 GB" of yours actually means. If someone writes GB it is
    decimal, or else he writes GiB. No need to second guess.

    --
    Cheers, Carlos.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Blake@21:1/5 to robin_listas@es.invalid on Thu Feb 2 08:41:32 2023
    XPost: alt.windows7.general, alt.os.linux

    On Thu, 2 Feb 2023 10:37:29 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-02-01 23:21, Pancho wrote:
    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.
    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS. For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.


    I care about what is usable to me, just like with disk storage.

    For example If I go to System>About, here under Windows 11, it says
    Installed RAM 32.0 GB (31.8 usable). How much is installed and how
    much is usable are two different things.


    For me, it says 16 GB and 15.8 GB. I don't know if that 15.8 GB is
    exactly 15.8 GiB or only accurate to 1 dp. Either way, the prettiness of
    exactly 16 has gone.

    The prettiness is gone, but I'm still left with the problem that if I
    want to calculate the amount of usable RAM my software data structures
    require, in GiB, I have to convert my natural decimal calculations to a
    binary format, to avoid the 7.4% difference between the GiB, and the
    more orthodox decimal GB. Maybe other programmers don't estimate memory
    requirements, don't use algorithms that require a lot of memory? It
    wouldn't surprise me, innumeracy is surprisingly high in IT.

    Clinging to unnecessary complexity reminds me of the metric martyrs and
    their insistence on using imperial weights and measures. When automating
    some business process, you often see veterans of the industry try to
    cling to unnecessary complexity. I guess if you remove the complexity,
    the competitive advantage they have in understanding it disappears, they
    are diminished.

    As far as I can see, the US government and the standards organizations
    have agreed on the decimal GB.

    If all software and docs stick to the units as described by the
    standards organizations, there would be no doubts about what that "16 GB
    and 15.8 GB" of yours actually means. If some one writes GB it is
    decimal, or else he writes GiB. No need to second guess.


    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is consistency in the computer world, not what definitions are used.

    I don't think we'll ever get consistency.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Ken Blake on Thu Feb 2 17:12:37 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-02-02 16:41, Ken Blake wrote:
    On Thu, 2 Feb 2023 10:37:29 +0100, "Carlos E.R."
    <robin_listas@es.invalid> wrote:

    On 2023-02-01 23:21, Pancho wrote:
    On 2/1/23 15:25, Ken Blake wrote:
    On Tue, 31 Jan 2023 22:12:13 +0000, Pancho <Pancho.Jones@proton.me>
    wrote:

    On 1/31/23 14:40, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@proton.me> writes:
    On 30/01/2023 18:10, Ken Blake wrote:
    So as far as I'm concerned, despite the existing standards, KB means
    1024, MB means 1024 x 1024, GB means 1024 x 1024 x 1024, etc. and KiB,
    MiB, GiB, etc. are almost never used and shouldn't be. A disk drive
    that's called 2GB should have 2,147,483,648 bytes, not 2,000,000,000.
    But...

    We use decimal for most other stuff, why would we want to use binary
    for this special case? K means 10^3 not 2^10, M means 10^6, G means
    10^9. Why introduce complexity, unnecessary special cases?

    Well, it’s hardly ‘introduce’ any more, the convention is decades old.

    What advantage do you think 2^10, 2^20 offers?

    Being able to talk about 16GB RAM (or 16GiB if you really must)
    instead of 17.179869184GB RAM.


    RAM is never 16 GiB either,

    Yes it is.

    areas will be reserved by the OS.  For
    instance, 4 GiB on Windows XP 32 only had about 3.1 GiB available for my
    use.


    Yes, but how much RAM there is and how much is available for your use
    are two different things. The amount of RAM installed on your computer
    is 4GB, not 3.1GB.


    I care about what is usable to me, just like with disk storage.

    For example If I go to System>About, here under Windows 11, it says
    Installed RAM 32.0 GB (31.8 usable). How much is installed and how
    much is usable are two different things.


    For me, it says 16 GB and 15.8 GB. I don't know if that 15.8 GB is
    exactly 15.8 GiB or only accurate to 1 dp. Either way, the prettiness of
    exactly 16 has gone.

    The prettiness is gone, but I'm still left with the problem that if I
    want to calculate the amount of usable RAM my software data structures
    require, in GiB, I have to convert my natural decimal calculations to a
    binary format, to avoid the 7.4% difference between the GiB, and the
    more orthodox decimal GB. Maybe other programmers don't estimate memory
    requirements, don't use algorithms that require a lot of memory? It
    wouldn't surprise me, innumeracy is surprisingly high in IT.

    Clinging to unnecessary complexity reminds me of the metric martyrs and
    their insistence on using imperial weights and measures. When automating
    some business process, you often see veterans of the industry try to
    cling to unnecessary complexity. I guess if you remove the complexity,
    the competitive advantage they have in understanding it disappears, they
    are diminished.

    As far as I can see, the US government and the standards organizations
    have agreed on the decimal GB.

    If all software and docs stick to the units as described by the
    standards organizations, there would be no doubts about what that "16 GB
    and 15.8 GB" of yours actually means. If some one writes GB it is
    decimal, or else he writes GiB. No need to second guess.


    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is consistency in the computer world, not what definitions are used.

    But there would be confusion for the many people for whom K is 1000, and
    who would have to deduce from context whether it means 1000 or 1024.

    Like disc manufacturers, always using 10^


    I don't think we'll ever get consistency.

    I don't like the bibyte units, I have been all my life doing 2^
    calculations. But now that there is a standardization by the
    organizations that do standards, I decided to accept it in full, like it
    or not.


    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Daniel65@21:1/5 to Carlos E. R. on Fri Feb 3 22:24:01 2023
    XPost: alt.windows7.general, alt.os.linux

    Carlos E. R. wrote on 3/2/23 3:12 am:
    On 2023-02-02 16:41, Ken Blake wrote:

    <Snip>

    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is
    consistency in the computer world, not what definitions are used.

    But there would be confusion to the many people for which K is 1000, and
    have to deduce from context if this is 1000 or 1024.
    Should we all go on strike until Society accepts that the smaller
    letter, 'k', represents the smaller number, 1000, and the larger
    letter, 'K', represents the larger number, 1024?? Etc., etc.!

    Who's with me?? ;-P
    --
    Daniel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Blake@21:1/5 to daniel47@nomail.afraid.org on Fri Feb 3 07:12:44 2023
    XPost: alt.windows7.general, alt.os.linux

    On Fri, 3 Feb 2023 22:24:01 +1100, Daniel65
    <daniel47@nomail.afraid.org> wrote:

    Carlos E. R. wrote on 3/2/23 3:12 am:
    On 2023-02-02 16:41, Ken Blake wrote:

    <Snip>

    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is
    consistency in the computer world, not what definitions are used.

    But there would be confusion to the many people for which K is 1000, and
    have to deduce from context if this is 1000 or 1024.
    Should we all go on strike until Society accepts that the smaller
    letter, 'k', represents the smaller number, 1000, and the larger
    letter, 'K', represents the larger number, 1024?? Etc., etc.!

    Who's with me?? ;-P


    Not me. I don't think that's a great idea. It would be too hard to
    recognize the difference and too hard to remember what's what.

    K and k are essentially no different from KB and KiB, just spelled
    differently.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Carlos E. R.@21:1/5 to Ken Blake on Fri Feb 3 15:48:26 2023
    XPost: alt.windows7.general, alt.os.linux

    On 2023-02-03 15:12, Ken Blake wrote:
    On Fri, 3 Feb 2023 22:24:01 +1100, Daniel65
    <daniel47@nomail.afraid.org> wrote:

    Carlos E. R. wrote on 3/2/23 3:12 am:
    On 2023-02-02 16:41, Ken Blake wrote:

    <Snip>

    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is
    consistency in the computer world, not what definitions are used.

    But there would be confusion for the many people for whom K is 1000, and
    who have to deduce from context whether this is 1000 or 1024.
    Should we all go on strike until Society accepts that the smaller
    letter, 'k', represents the smaller number, 1000, and the larger letter,
    'K', represents the larger number, 1024?? Etc., etc.!

    Who's with me?? ;-P


    Not me. I don't think that's a great idea. It would be too hard to
    recognize the difference and too hard to remember what's what.

    Right.

    And the SI already says it is "k", lower case, which means "kilo". There
    is no "K". So that would require a change to the standards.

    B and b are used for bytes and bits, respectively. And we forget.

    --
    Cheers,
    Carlos E.R.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
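    [The B/b point above (bytes vs bits) trips people up most often with
    link speeds. A minimal sketch, with a made-up helper name, assuming
    decimal "mega" on both sides:]

    ```python
    # Bits (b) vs bytes (B): a link speed quoted in megabits per second
    # is a factor of eight larger than the same speed in megabytes.
    BITS_PER_BYTE = 8

    def mbps_to_megabytes_per_s(megabits: float) -> float:
        """Megabits per second -> megabytes per second (decimal mega)."""
        return megabits / BITS_PER_BYTE

    # A "100 Mb/s" link tops out at 12.5 MB/s of payload, before overhead:
    print(mbps_to_megabytes_per_s(100))  # 12.5
    ```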
  • From jjb@21:1/5 to Carlos E. R. on Fri Feb 3 16:22:33 2023
    XPost: alt.windows7.general, alt.os.linux

    On 03-02-2023 15:48, Carlos E. R. wrote:
    On 2023-02-03 15:12, Ken Blake wrote:
    On Fri, 3 Feb 2023 22:24:01 +1100, Daniel65
    <daniel47@nomail.afraid.org> wrote:

    Carlos E. R. wrote on 3/2/23 3:12 am:
    On 2023-02-02 16:41, Ken Blake wrote:

    <Snip>

    I completely agree. But alternatively if all software and docs would
    stick to the powers of two definitions of GB, etc. there would be no
    doubt about what KB, MB, GB, TB, etc. meant. What's most important is
    consistency in the computer world, not what definitions are used.

    But there would be confusion for the many people for whom K is 1000, and
    who have to deduce from context whether this is 1000 or 1024.
    Should we all go on strike until Society accepts that the smaller
    letter, 'k', represents the smaller number, 1000, and the larger letter,
    'K', represents the larger number, 1024?? Etc., etc.!

    Who's with me?? ;-P


    Not me. I don't think that's a great idea. It would be too hard to
    recognize the difference and too hard to remember what's what.

    Right.

    And the SI already says it is "k", lower case, which means "kilo". There
    is no "K". So that would require a change to the standards.

    B and b are used for bytes and bits, respectively. And we forget.

    Furthermore, a lowercase m stands for milli (which, for bytes, seems
    rather silly).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Eric Pozharski@21:1/5 to jjb on Sat Feb 4 12:52:33 2023
    XPost: alt.windows7.general, alt.os.linux

    with <ZE9DL.424202$Tcw8.32549@fx10.iad> jjb wrote:
    On 03-02-2023 15:48, Carlos E. R. wrote:
    On 2023-02-03 15:12, Ken Blake wrote:
    On Fri, 3 Feb 2023 22:24:01 +1100, Daniel65
    <daniel47@nomail.afraid.org> wrote:
    Carlos E. R. wrote on 3/2/23 3:12 am:
    On 2023-02-02 16:41, Ken Blake wrote:

    *SKIP*
    And the SI already says it is "k", lower case, which means "kilo".
    There is no "K". So that would require a change to the standards.
    B and b are used for bytes and bits, respectively. And we forget.
    Furthermore, a lowercase m stands for milli (which, for bytes, seems
    rather silly).

    Unless speeds or densities.

    Also, </usr/share/misc/units.dat> is worth checking. Turns out 'K' is
    already taken (it's Kelvin).

    Also, I was musing about nice tangent: is byte primitive or derived?
    If derived then does it come from mass and/or temperature? Turns out --
    dead end, 'bit' is kinda primitive, 'byte' is derived. Such a loss :(

    --
    Torvalds' goal for Linux is very simple: World Domination
    Stallman's goal for GNU is even simpler: Freedom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)