• Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

    From Dale@21:1/5 to Nikos Chantziaras on Tue Apr 18 17:10:01 2023
    Nikos Chantziaras wrote:
    On 16/04/2023 01:47, Dale wrote:
    Anything else that makes these special?  Any tips or tricks?

    Only three things.

    1. Make sure the fstrim service is active (should run every week by
    default, at least with systemd, "systemctl enable fstrim.timer".)

    2. Don't use the "discard" mount option.

    3. Use smartctl to keep track of TBW.

    People are always mentioning performance, but it's not the important
    factor for me. The more important factor is longevity. You want your
    storage device to last as long as possible, and fstrim helps, discard
    hurts.

    With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev)
    pay attention to the "Data Units Written" field. Your 500GB 870 Evo
has a TBW rating of 300. That's "terabytes written". This is the
    manufacturer's "guarantee" that the device won't fail prior to writing
    that many terabytes to it. When you reach that, it doesn't mean it
    will fail, but it does mean you might want to start thinking of
    replacing it with a new one just in case, and then keep using it as a secondary drive.

    If you use KDE, you can also view that SMART data in the "SMART
    Status" UI (just type "SMART status" in the KDE application launcher.)





    I'm on openrc here but someone posted a link to make a cron job for
    fstrim.  When I get around to doing something with the drive, it's on my
    todo list.  I may go a month tho.  I only update my OS once a week, here lately, every other week, and given the large amount of unused space, I
    doubt it will run short of any space.  I'm still thinking on that. 
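The cron route might look something like this weekly script (a sketch, not a Gentoo-official recipe; it assumes a cron implementation that runs scripts in /etc/cron.weekly/ and that fstrim is in the default PATH):

```shell
#!/bin/sh
# Sketch of /etc/cron.weekly/fstrim for OpenRC systems without the
# systemd fstrim.timer. Mark it executable with chmod +x.
# --all trims every mounted filesystem that supports discard;
# piping to logger sends the report to syslog so runs can be audited.
fstrim --all --verbose 2>&1 | logger -t fstrim
```

Running it weekly mirrors the default cadence of the systemd timer.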

    I've read about discard.  Gonna avoid that.  ;-) 

    Given how I plan to use this drive, that should last a long time.  I'm
    just putting the OS stuff on the drive and I compile on a spinning rust
    drive and use -k to install the built packages on the live system.  That should help minimize the writes.  Since I still need a spinning rust
    drive for swap and such, I thought about putting /var on spinning rust. 
    After all, when running software, activity on /var is minimal. Thing is,
    I got a larger drive so I got plenty of space.  It could make it a
    little faster.  Maybe. 

    I read about that bytes written.  With the way you explained it, it
    confirms what I was thinking it meant.  That's a lot of data.  I
    currently have around 100TBs of drives lurking about, either in my rig
    or for backups.  I'd have to write three times that amount of data on
    that little drive.  That's a LOT of data for a 500GB drive. 

    All good info and really helpful.  Thanks. 

    Dale

    :-)  :-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nikos Chantziaras@21:1/5 to Dale on Tue Apr 18 17:00:01 2023
    On 16/04/2023 01:47, Dale wrote:
    Anything else that makes these special?  Any tips or tricks?

    Only three things.

    1. Make sure the fstrim service is active (should run every week by
    default, at least with systemd, "systemctl enable fstrim.timer".)

    2. Don't use the "discard" mount option.

    3. Use smartctl to keep track of TBW.

    People are always mentioning performance, but it's not the important
    factor for me. The more important factor is longevity. You want your
    storage device to last as long as possible, and fstrim helps, discard hurts.

    With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev) pay attention to the "Data Units Written" field. Your 500GB 870 Evo has a
TBW rating of 300. That's "terabytes written". This is the manufacturer's "guarantee" that the device won't fail prior to writing that many
    terabytes to it. When you reach that, it doesn't mean it will fail, but
    it does mean you might want to start thinking of replacing it with a new
    one just in case, and then keep using it as a secondary drive.

    If you use KDE, you can also view that SMART data in the "SMART Status"
    UI (just type "SMART status" in the KDE application launcher.)
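Checking that counter can be scripted; a sketch (/dev/sda is a placeholder, and the field name varies: NVMe drives report "Data Units Written" in 512,000-byte units, while SATA drives like the 870 Evo usually expose "Logical Sectors Written" in 512-byte units in their Device Statistics log):

```shell
# Sketch: pull the lifetime-writes counter from SMART data.
# The grep matches both the NVMe and the SATA spellings of the field.
smartctl -x /dev/sda | grep -iE 'data units written|logical sectors written'
```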

  • From Nikos Chantziaras@21:1/5 to Dale on Tue Apr 18 17:40:01 2023
    On 18/04/2023 18:05, Dale wrote:
    I compile on a spinning rust
    drive and use -k to install the built packages on the live system.  That should help minimize the writes.

    I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
    keep binary packages around, those I have on my HDD, as well as the
    distfiles:

    DISTDIR="/mnt/Data/gentoo/distfiles"
    PKGDIR="/mnt/Data/gentoo/binpkgs"
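The tmpfs mount itself is a single fstab line; a sketch (the 16G size and portage ownership are assumptions matching the 32GB-RAM setup above):

```shell
# /etc/fstab sketch: compile in RAM to spare the SSD.
# size=16G suits a 32GB-RAM machine; adjust to taste.
tmpfs   /var/tmp/portage   tmpfs   size=16G,uid=portage,gid=portage,mode=775,noatime   0 0
```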


    Since I still need a spinning rust
    drive for swap and such, I thought about putting /var on spinning rust.

    Nah. The data written there is absolutely minuscule. Firefox writes like
    10 times more just while running it without even any web page loaded...
    And for actual browsing, it becomes more like 1000 times more (mostly
    the Firefox cache.)

    I wouldn't worry too much about it. I've been using my current SSD since
    2020, and I'm at 7TBW right now (out of 200 the drive is rated for) and
    I dual boot Windows and install/uninstall large games on it quite often.
    So with an average of 3TBW per year, I'd need over 80 years to reach
    200TBW :-P But I mentioned it in case your use case is different (like
    large video files or recording and whatnot.)

  • From Mark Knecht@21:1/5 to realnc@gmail.com on Tue Apr 18 20:00:01 2023
    On Tue, Apr 18, 2023 at 7:53 AM Nikos Chantziaras <realnc@gmail.com> wrote:

    On 16/04/2023 01:47, Dale wrote:
    Anything else that makes these special? Any tips or tricks?

    Only three things.

    1. Make sure the fstrim service is active (should run every week by
    default, at least with systemd, "systemctl enable fstrim.timer".)

    2. Don't use the "discard" mount option.

    3. Use smartctl to keep track of TBW.

    People are always mentioning performance, but it's not the important
    factor for me. The more important factor is longevity. You want your
    storage device to last as long as possible, and fstrim helps, discard
    hurts.

    With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev) pay attention to the "Data Units Written" field. Your 500GB 870 Evo has a
TBW rating of 300. That's "terabytes written". This is the manufacturer's "guarantee" that the device won't fail prior to writing that many
    terabytes to it. When you reach that, it doesn't mean it will fail, but
    it does mean you might want to start thinking of replacing it with a new
    one just in case, and then keep using it as a secondary drive.

    If you use KDE, you can also view that SMART data in the "SMART Status"
    UI (just type "SMART status" in the KDE application launcher.)


    Add to that list that Samsung only warranties the drive for 5 years
    no matter how much or how little you use it. Again, it doesn't mean
    it will die in 5 years just as it doesn't mean it will die if it has had
    more than 300TBW. However it _might_ mean that data written
    to the drive and never touched again may be gone in 5 years.

Non-volatile memory doesn't hold its charge forever, just as
    magnetic disk drives and magnetic tape will eventually lose their
    data.

    On all of my systems here at home, looking at the TBW values, my
    drives will go out of warranty at 5 years long before I'll get anywhere
    near the TBW spec. However I run stable, long term distros that don't
    update often and mostly use larger data files.


  • From Dale@21:1/5 to Nikos Chantziaras on Tue Apr 18 22:10:01 2023
    Nikos Chantziaras wrote:
    On 18/04/2023 18:05, Dale wrote:
    I compile on a spinning rust
    drive and use -k to install the built packages on the live system.  That
    should help minimize the writes. 

    I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
    keep binary packages around, those I have on my HDD, as well as the distfiles:

    DISTDIR="/mnt/Data/gentoo/distfiles"
    PKGDIR="/mnt/Data/gentoo/binpkgs"



Most of mine is in tmpfs too, except for the larger packages, such as
Firefox, LO and a couple others.  Thing is, those few large ones would
    rack up a lot of writes themselves since they are so large.  That said,
    it would be faster.  ;-) 


    Since I still need a spinning rust
    drive for swap and such, I thought about putting /var on spinning rust.

    Nah. The data written there is absolutely minuscule. Firefox writes
    like 10 times more just while running it without even any web page
    loaded... And for actual browsing, it becomes more like 1000 times
    more (mostly the Firefox cache.)

    I wouldn't worry too much about it. I've been using my current SSD
    since 2020, and I'm at 7TBW right now (out of 200 the drive is rated
    for) and I dual boot Windows and install/uninstall large games on it
    quite often. So with an average of 3TBW per year, I'd need over 80
    years to reach 200TBW :-P But I mentioned it in case your use case is different (like large video files or recording and whatnot.)





    That's kinda my thinking on one side of the coin.  Having it on a
    spinning rust drive just wouldn't make much difference.  Most things
    there like log files and such are just files being added to not
    completely rewritten.  I don't think it would make much difference to
    the life span of the drive. 

    Someone mentioned 16K block size.  I've yet to find out how to do that. 
    The man page talks about the option, -b I think, but google searches
    seem to say it isn't supported.  Anyone actually set that option? 
    Recall the options that were used? 

    I did so much the past few days, I'm worthless today.  Parts of me are
    pretty angry, joints and such.  Still, I'm glad I got done what I did. 
    It's that busy time of year. 

    Thanks.

    Dale

    :-)  :-) 

  • From Wol@21:1/5 to Dale on Tue Apr 18 23:00:02 2023
    On 18/04/2023 21:01, Dale wrote:
    I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
    keep binary packages around, those I have on my HDD, as well as the
    distfiles:

    DISTDIR="/mnt/Data/gentoo/distfiles"
    PKGDIR="/mnt/Data/gentoo/binpkgs"


Most of mine is in tmpfs too, except for the larger packages, such as
Firefox, LO and a couple others.  Thing is, those few large ones would
    rack up a lot of writes themselves since they are so large.  That said,
    it would be faster.  😉

    Not sure if it's set up on my current system, but I always configured /var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
    equal to twice the mobo's max memory. Three drives times 64GB times two
    is a helluva lot of swap.

    So here I would just allocate /var/tmp/portage maybe 64 - 128 GB of
    space. If the emerge fits in my current 32GB ram, then fine. If not, it
    spills over into swap. I don't have to worry about allocating extra
    space for memory hogs like Firefox, LO, Rust etc.

    And seeing as my smallest drive is 3TB, losing 128GB per drive to swap
    isn't actually that much.

    Although, as was pointed out to me, if I did suffer a denial-of-service
    attack that tried to fill memory, that amount of swap would knacker my
    system for a LONG time.

    Cheers,
    Wol

  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Tue Apr 18 23:00:02 2023
    On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>

    Someone mentioned 16K block size.
    <SNIP>

    I mentioned it but I'm NOT suggesting it.

    It would be the -b option if you were to do it for ext4.

I'm using the default block size (4k) on all my SSDs and M.2s and
as I've said a couple of times, I'm going to blast past the 5 year
    warranty time long before I write too many terabytes.

    Keep it simple.

    - Mark


  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Tue Apr 18 23:30:01 2023
    On Tue, Apr 18, 2023 at 2:15 PM Dale <rdalek1967@gmail.com> wrote:

    Mark Knecht wrote:



    On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com> wrote:
    <SNIP>

    Someone mentioned 16K block size.
    <SNIP>

    I mentioned it but I'm NOT suggesting it.

    It would be the -b option if you were to do it for ext4.

I'm using the default block size (4k) on all my SSDs and M.2s and
as I've said a couple of times, I'm going to blast past the 5 year
    warranty time long before I write too many terabytes.

    Keep it simple.

    - Mark

    One reason I ask, some info I found claimed it isn't even supported. It
actually spits out an error message and doesn't create the file system. I wasn't sure if that info was outdated or what so I thought I'd ask. I
    think I'll skip that part. Just let it do its thing.

    Dale
    <SNIP>

    I'd start with something like

    mkfs.ext4 -b 16384 /dev/sdX

    and see where it leads. It's *possible* that the SSD might fight
    back, sending the OS a response that says it doesn't want to
    do that.

    It could also be a partition alignment issue, although if you
    started your partition at the default starting address I'd doubt
    that one.
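One way to settle the -b 16384 question without risking a real disk is a loopback file; a sketch (the file path is arbitrary, and note the kernel generally refuses to mount an ext4 filesystem whose block size exceeds the CPU page size, 4 KiB on x86-64):

```shell
# Sketch: try a 16K ext4 block size on a throwaway file instead of /dev/sdX.
# mke2fs warns that 16384-byte blocks are bigger than the page size;
# -F -F forces it past the sanity checks so you can see what it does.
truncate -s 256M /tmp/blocksize-test.img
mkfs.ext4 -q -F -F -b 16384 /tmp/blocksize-test.img || echo "mkfs refused 16K blocks"
getconf PAGE_SIZE   # the practical upper bound for a mountable ext4 block size
rm -f /tmp/blocksize-test.img
```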

    Anyway, I just wanted to be clear that I'm not worried about
    write amplification based on my system data.

    Cheers,
    Mark


  • From Dale@21:1/5 to Mark Knecht on Tue Apr 18 23:20:01 2023
    Mark Knecht wrote:


    On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com <mailto:rdalek1967@gmail.com>> wrote:
    <SNIP>

    Someone mentioned 16K block size.
    <SNIP>

    I mentioned it but I'm NOT suggesting it.

    It would be the -b option if you were to do it for ext4.

I'm using the default block size (4k) on all my SSDs and M.2s and
as I've said a couple of times, I'm going to blast past the 5 year warranty time long before I write too many terabytes.

    Keep it simple. 

    - Mark


One reason I ask, some info I found claimed it isn't even supported.  It actually spits out an error message and doesn't create the file system. 
    I wasn't sure if that info was outdated or what so I thought I'd ask.  I
    think I'll skip that part.  Just let it do its thing. 

    Dale

    :-)  :-) 

P. S.  Kudos to whoever came up with Tylenol. 


  • From Frank Steinmetzger@21:1/5 to All on Wed Apr 19 00:20:01 2023
    Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:

    Given how I plan to use this drive, that should last a long time.  I'm
    just putting the OS stuff on the drive and I compile on a spinning rust
    drive and use -k to install the built packages on the live system.  That should help minimize the writes.

    Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day. I’d say don’t worry. Besides: endurance tests showed that SSDs were able to
    withstand multiples of their guaranteed TBW until they actually failed (of course there are always exceptions to the rule).

    I read about that bytes written.  With the way you explained it, it
    confirms what I was thinking it meant.  That's a lot of data.  I
    currently have around 100TBs of drives lurking about, either in my rig
    or for backups.  I'd have to write three times that amount of data on
    that little drive.  That's a LOT of data for a 500GB drive. 

    If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime`
    to see how much data has been written to that partition since you formatted it. Just to get an idea of what you are looking at on your setup.

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    What woman is looking for a man who is looking for a woman looking for a man?

  • From Frank Steinmetzger@21:1/5 to All on Wed Apr 19 00:50:01 2023
    Am Wed, Apr 19, 2023 at 12:18:14AM +0200 schrieb Frank Steinmetzger:

    If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` to see how much data has been written to that partition since you formatted it. Just to get an idea of what you are looking at on your setup.

    For comparison:

    I’m writing from my Surface Go 1 right now. It’s running Arch linux with KDE
    and I don’t use it very often (meaning, I don’t update it as often as my main rig). But updates in Arch linux can be volume-intensive, especially because there are frequent kernel updates (I’ve had over 50 since June 2020, each accounting for over 300 MB of writes), and other updates of big
    packages if a dependency like python changes. In Gentoo you do revdep-rebuild, binary distros ship new versions of all affected packages, like libreoffice, or Qt, or texlive.

    Anyways, the root partition measures 22 G and has a lifetime write of 571 GB in almost three years. The home partition (97 GB in size) is at 877 GB. That seems actually a lot, because I don’t really do that much high-volume stuff there. My media archive with all the photos and music and such sits on a separate data partition, which is not synced to the Surface due to its small SSD of only 128 GB.

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    We shall be landing shortly.
    Please return your stewardess to the upright position.

  • From Wols Lists@21:1/5 to Frank Steinmetzger on Wed Apr 19 01:10:02 2023
    On 18/04/2023 23:13, Frank Steinmetzger wrote:
/var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
equal to twice the mobo's max memory. Three drives times 64GB times two is a
helluva lot of swap.

    Uhm … why? The moniker of swap = 2×RAM comes from times when RAM was scarce.
    What do you need so much swap for, especially with 32 GB RAM to begin with? And if you really do have use cases which cause regular swapping, it’d be less painful if you just added some more RAM.

    Actually, if you know your history, it does NOT come from "times when
    RAM was scarce". It comes from the original Unix swap algorithm which
    NEEDED twice ram.

    I've searched (unsuccessfully) on LWN for the story, but at some point
    (I think round about kernel 2.4.10) Linus ripped out all the ugly "optimisation" code, and anybody who ran the vanilla kernel with "swap
    but less than twice ram" found it crashed the instant the system touched
    swap. Linus was not sympathetic to people who hadn't read the release
    notes ...

    Andrea Arcangeli and someone else (I've forgotten who) wrote two
    competing memory managers in classic "Linus managerial style" as he
    played them off against each other.

    I've always allocated swap like that pretty much ever since. Maybe the
    new algorithm hasn't got the old wanting twice ram, maybe it has, I
    never found out, but I've not changed that habit.

(NB This system is pretty recent, my previous system had iirc 8GB (and a
maxed out value of 16GB), not enough for a lot of the bigger programs.)

    Before that point, I gather it actually made a difference to the
    efficiency of the system as the optimisations kicked in, but everybody
    believed it was an old wives tale - until Linus did that ...

    Cheers,
    Wol

  • From Dale@21:1/5 to Frank Steinmetzger on Wed Apr 19 03:50:01 2023
    Frank Steinmetzger wrote:
    Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:

    Given how I plan to use this drive, that should last a long time.  I'm
    just putting the OS stuff on the drive and I compile on a spinning rust
    drive and use -k to install the built packages on the live system.  That
    should help minimize the writes.
    Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day. I’d say don’t worry. Besides: endurance tests showed that SSDs were able to
    withstand multiples of their guaranteed TBW until they actually failed (of course there are always exceptions to the rule).

    I read about that bytes written.  With the way you explained it, it
    confirms what I was thinking it meant.  That's a lot of data.  I
    currently have around 100TBs of drives lurking about, either in my rig
    or for backups.  I'd have to write three times that amount of data on
    that little drive.  That's a LOT of data for a 500GB drive. 
    If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime` to see how much data has been written to that partition since you formatted it. Just to get an idea of what you are looking at on your setup.



    I skipped the grep part and looked at the whole output.  I don't recall
    ever seeing that command before so I wanted to see what it did.  Dang,
    lots of info. 

    Filesystem created:       Sun Apr 15 03:24:56 2012
    Lifetime writes:          993 GB

That's for the main / partition.  I have /usr on its own partition tho. 

    Filesystem created:       Sun Apr 15 03:25:48 2012
    Lifetime writes:          1063 GB

    I'd think that / and /usr would be the most changed parts of the OS. 
    After all, /bin and /sbin are on / too as is /lib*.  If that is even
    remotely correct, both would only be around 2TBs.  That dang thing may
    outlive me even if I don't try to minimize writes.  ROFLMBO

    Now that says a lot.  Really nice info. 

    Thanks.

    Dale

    :-)  :-) 

  • From Nikos Chantziaras@21:1/5 to Dale on Wed Apr 19 10:10:01 2023
    On 19/04/2023 04:45, Dale wrote:
    Filesystem created:       Sun Apr 15 03:24:56 2012
    Lifetime writes:          993 GB

That's for the main / partition.  I have /usr on its own partition tho.

    Filesystem created:       Sun Apr 15 03:25:48 2012
    Lifetime writes:          1063 GB

    I'd think that / and /usr would be the most changed parts of the OS.
    After all, /bin and /sbin are on / too as is /lib*.  If that is even remotely correct, both would only be around 2TBs.  That dang thing may outlive me even if I don't try to minimize writes.  ROFLMBO

    I believe this only shows the lifetime writes to that particular
    filesystem since it's been created?

    You can use smartctl here too. At least on my HDD, the HDD's firmware
keeps track of the lifetime logical sectors written. Logical sectors
    are 512 bytes (physical are 4096). The logical sector size is also shown
    by smartctl.

    With my HDD:

    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical

    Then to get the total logical sectors written:

    # smartctl -x /dev/sda | grep -i 'sectors written'
    0x01 0x018 6 37989289142 --- Logical Sectors Written

    Converting that to terabytes written with "bc -l":

    37988855446 * 512 / 1024^4
    17.68993933033198118209

    Almost 18TB.
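    That arithmetic is easy to script as well; a sketch using awk in place
    of bc, with the sector count taken from the smartctl output above:

    ```shell
    # Convert SMART "Logical Sectors Written" (512-byte logical sectors)
    # into TiB written.
    sectors=37989289142   # value reported by smartctl above
    awk -v s="$sectors" 'BEGIN { printf "%.2f TiB written\n", s * 512 / 1024^4 }'
    # → 17.69 TiB written
    ```
    
    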

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Nikos Chantziaras on Wed Apr 19 11:50:01 2023
    Nikos Chantziaras wrote:
    On 19/04/2023 04:45, Dale wrote:
    Filesystem created:       Sun Apr 15 03:24:56 2012
    Lifetime writes:          993 GB

    That's for the main / partition.  I have /usr on its own partition tho.

    Filesystem created:       Sun Apr 15 03:25:48 2012
    Lifetime writes:          1063 GB

    I'd think that / and /usr would be the most changed parts of the OS.
    After all, /bin and /sbin are on / too as is /lib*.  If that is even
    remotely correct, both would only be around 2TBs.  That dang thing may
    outlive me even if I don't try to minimize writes.  ROFLMBO

    I believe this only shows the lifetime writes to that particular
    filesystem since it's been created?

    You can use smartctl here too. At least on my HDD, the HDD's firmware
    keeps track of the lifetime logical sectors written. Logical sectors
    are 512 bytes (physical are 4096). The logical sector size is also
    shown by smartctl.

    With my HDD:

      # smartctl -x /dev/sda | grep -i 'sector size'
      Sector Sizes:     512 bytes logical, 4096 bytes physical

    Then to get the total logical sectors written:

      # smartctl -x /dev/sda | grep -i 'sectors written'
      0x01  0x018  6     37989289142  ---  Logical Sectors Written

    Converting that to terabytes written with "bc -l":

      37988855446 * 512 / 1024^4
      17.68993933033198118209

    Almost 18TB.





    I'm sure it is since the file system was created.  Look at the year
    tho.  It's about 11 years ago when I first built this rig.  If I've only
    written that amount of data to my current drive over the last 11 years,
    the SSD drive should last for many, MANY, years, decades even.  At this
    point, I should worry more about something besides it running out of
    write cycles.  LOL  I'd think technology changes will bring it to its
    end of life rather than write cycles. 

    Eventually, I'll have time to put it to use.  Too much going on right now tho. 

    Dale

    :-)  :-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter Humphrey@21:1/5 to All on Wed Apr 19 12:40:01 2023
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

    With my HDD:

    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical

    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 0

    :)

    --
    Regards,
    Peter.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to All on Wed Apr 19 19:20:01 2023
    On Wed, Apr 19, 2023 at 3:35 AM Peter Humphrey <peter@prh.myzen.co.uk>
    wrote:

    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

    With my HDD:

    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical

    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 0


    That command, on my system anyway, does pick up all the
    LBA sizes:

    1) Windows - 1TB Sabrent:

    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 2
    1 - 4096 0 1

    Data Units Read: 8,907,599 [4.56 TB]
    Data Units Written: 4,132,726 [2.11 TB]
    Host Read Commands: 78,849,158
    Host Write Commands: 55,570,509

    Error Information (NVMe Log 0x01, 16 of 63 entries)
    Num ErrCount SQId CmdId Status PELoc LBA NSID VS
    0 1406 0 0x600b 0x4004 0x028 0 0 -

    2) Kubuntu - 1TB Crucial

    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 1
    1 - 4096 0 0

    Data Units Read: 28,823,498 [14.7 TB]
    Data Units Written: 28,560,888 [14.6 TB]
    Host Read Commands: 137,865,594
    Host Write Commands: 209,406,594

    Error Information (NVMe Log 0x01, 16 of 16 entries)
    Num ErrCount SQId CmdId Status PELoc LBA NSID VS
    0 1735 0 0x100c 0x4005 0x028 0 0 -

    3) Scratch pad - 128GB SSSTC (No name) M.2 chip mounted on Joylifeboard
    PCIe card

    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 0

    Data Units Read: 363,470 [186 GB]
    Data Units Written: 454,447 [232 GB]
    Host Read Commands: 2,832,367
    Host Write Commands: 2,833,717

    Error Information (NVMe Log 0x01, 16 of 64 entries)
    No Errors Logged
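    For drives that list a second 4096-byte format (Fmt 1 in the tables
    above), nvme-cli can switch the namespace over to it. A sketch, not tied
    to any of the drives above; the device name is an example, and note
    that a format erases everything on the namespace:

    ```shell
    # Show the supported LBA formats in human-readable form (nvme-cli).
    nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'

    # DESTRUCTIVE: reformat the namespace to LBA format 1 (4096-byte data size).
    # Only do this on an empty drive, before partitioning.
    nvme format /dev/nvme0n1 --lbaf=1
    ```
    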

    NOTE: When I first got interested in M.2 I bought a PCI Express
    card and an M.2 chip just to use for a while with Astrophotography
    files which tend to be 24MB coming out of my camera but grow
    to possibly 1GB as processing occurs. Total cost was about
    $30 and might be a possible solution for Gentoo users who
    want a faster scratch pad for system updates. Even this
    second-rate hardware has been reliable and is pretty fast:

    https://www.amazon.com/gp/product/B09K4YXN33
    https://www.amazon.com/gp/product/B08ZB6YVPW

    mark@science2:~$ sudo hdparm -tT /dev/nvme2n1
    /dev/nvme2n1:
    Timing cached reads: 48164 MB in 1.99 seconds = 24144.06 MB/sec
    Timing buffered disk reads: 1210 MB in 3.00 seconds = 403.08 MB/sec
    mark@science2:~$

    Although not as fast as M.2 on the MB where the Sabrent M.2 blows
    away the Crucial M.2

    mark@science2:~$ sudo hdparm -tT /dev/nvme0n1

    /dev/nvme0n1:
    Timing cached reads: 47660 MB in 1.99 seconds = 23890.55 MB/sec
    Timing buffered disk reads: 5452 MB in 3.00 seconds = 1817.10 MB/sec
    mark@science2:~$ sudo hdparm -tT /dev/nvme1n1

    /dev/nvme1n1:
    Timing cached reads: 47310 MB in 1.99 seconds = 23714.77 MB/sec
    Timing buffered disk reads: 1932 MB in 3.00 seconds = 643.49 MB/sec
    mark@science2:~$


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Peter Humphrey on Wed Apr 19 20:00:01 2023
    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

    With my HDD:

    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 0

    :)


    When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it doesn't show block sizes.  It returns nothing.

    root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
    root@fireball / #

    This is the FULL output, in case it is hidden somewhere that grep and I can't find.  Keep in mind, this is a blank drive with no partitions or anything. 

    root@fireball / # smartctl -x /dev/sdd
    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.14.15-gentoo] (local build)
    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family:     Samsung based SSDs
    Device Model:     Samsung SSD 870 EVO 500GB
    Serial Number:    S6PWNXXXXXXXXXXX
    LU WWN Device Id: 5 002538 XXXXXXXXXX
    Firmware Version: SVT01B6Q
    User Capacity:    500,107,862,016 bytes [500 GB]
    Sector Size:      512 bytes logical/physical
    Rotation Rate:    Solid State Device
    Form Factor:      2.5 inches
    TRIM Command:     Available, deterministic, zeroed
    Device is:        In smartctl database 7.3/5440
    ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
    SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Wed Apr 19 12:57:03 2023 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    AAM feature is:   Unavailable
    APM feature is:   Unavailable
    Rd look-ahead is: Enabled
    Write cache is:   Enabled
    DSN feature is:   Unavailable
    ATA Security is:  Disabled, frozen [SEC2]
    Wt Cache Reorder: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

    General SMART Values:
    Offline data collection status:  (0x80) Offline data collection activity
                                            was never started.
                                            Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0) The previous self-test routine completed
                                            without error or no self-test has ever
                                            been run.
    Total time to complete Offline
    data collection:                (    0) seconds.
    Offline data collection
    capabilities:                    (0x53) SMART execute Offline immediate.
                                            Auto Offline data collection on/off support.
                                            Suspend Offline collection upon new
                                            command.
                                            No Offline surface scan supported.
                                            Self-test supported.
                                            No Conveyance Self-test supported.
                                            Selective Self-test supported.
    SMART capabilities:            (0x0003) Saves SMART data before entering
                                            power-saving mode.
                                            Supports SMART auto save timer.
    Error logging capability:        (0x01) Error logging supported.
                                            General Purpose Logging supported.
    Short self-test routine
    recommended polling time:        (   2) minutes.
    Extended self-test routine
    recommended polling time:        (  85) minutes.
    SCT capabilities:              (0x003d) SCT Status supported.
                                            SCT Error Recovery Control supported.
                                            SCT Feature Control supported.
                                            SCT Data Table supported.

    SMART Attributes Data Structure revision number: 1
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
      5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
      9 Power_On_Hours          -O--CK   099   099   000    -    75
     12 Power_Cycle_Count       -O--CK   099   099   000    -    3
    177 Wear_Leveling_Count     PO--C-   100   100   000    -    0
    179 Used_Rsvd_Blk_Cnt_Tot   PO--C-   100   100   010    -    0
    181 Program_Fail_Cnt_Total  -O--CK   100   100   010    -    0
    182 Erase_Fail_Count_Total  -O--CK   100   100   010    -    0
    183 Runtime_Bad_Block       PO--C-   100   100   010    -    0
    187 Uncorrectable_Error_Cnt -O--CK   100   100   000    -    0
    190 Airflow_Temperature_Cel -O--CK   077   069   000    -    23
    195 ECC_Error_Rate          -O-RC-   200   200   000    -    0
    199 CRC_Error_Count         -OSRCK   100   100   000    -    0
    235 POR_Recovery_Count      -O--C-   099   099   000    -    1
    241 Total_LBAs_Written      -O--CK   100   100   000    -    0
                                ||||||_ K auto-keep
                                |||||__ C event count
                                ||||___ R error rate
                                |||____ S speed/performance
                                ||_____ O updated online
                                |______ P prefailure warning

    General Purpose Log Directory Version 1
    SMART           Log Directory Version 1 [multi-sector log support]
    Address    Access  R/W   Size  Description
    0x00       GPL,SL  R/O      1  Log Directory
    0x01           SL  R/O      1  Summary SMART error log
    0x02           SL  R/O      1  Comprehensive SMART error log
    0x03       GPL     R/O      1  Ext. Comprehensive SMART error log
    0x04       GPL,SL  R/O      8  Device Statistics log
    0x06           SL  R/O      1  SMART self-test log
    0x07       GPL     R/O      1  Extended self-test log
    0x09           SL  R/W      1  Selective self-test log
    0x10       GPL     R/O      1  NCQ Command Error log
    0x11       GPL     R/O      1  SATA Phy Event Counters log
    0x13       GPL     R/O      1  SATA NCQ Send and Receive log
    0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
    0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
    0xa1           SL  VS      16  Device vendor specific log
    0xa5           SL  VS      16  Device vendor specific log
    0xce           SL  VS      16  Device vendor specific log
    0xe0       GPL,SL  R/W      1  SCT Command/Status
    0xe1       GPL,SL  R/W      1  SCT Data Transfer

    SMART Extended Comprehensive Error Log Version: 1 (1 sectors)
    No Errors Logged

    SMART Extended Self-test Log Version: 1 (1 sectors)
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed without error       00%        74         -
    # 2  Short offline       Completed without error       00%        50         -
    # 3  Short offline       Completed without error       00%        26         -
    # 4  Short offline       Completed without error       00%         2         -

    SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
      256        0    65535  Read_scanning was never started
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    SCT Status Version:                  3
    SCT Version (vendor specific):       256 (0x0100)
    Device State:                        Active (0)
    Current Temperature:                    23 Celsius
    Power Cycle Min/Max Temperature:     20/40 Celsius
    Lifetime    Min/Max Temperature:     20/40 Celsius
    Specified Max Operating Temperature:    70 Celsius
    Under/Over Temperature Limit Count:   0/0
    SMART Status:                        0xc24f (PASSED)

    SCT Temperature History Version:     2
    Temperature Sampling Period:         10 minutes
    Temperature Logging Interval:        10 minutes
    Min/Max recommended Temperature:      0/70 Celsius
    Min/Max Temperature Limit:            0/70 Celsius
    Temperature History Size (Index):    128 (80)

    Index    Estimated Time   Temperature Celsius
      81    2023-04-18 15:40    23  ****
     ...    ..(  2 skipped).    ..  ****
      84    2023-04-18 16:10    23  ****
      85    2023-04-18 16:20    24  *****
      86    2023-04-18 16:30    24  *****
      87    2023-04-18 16:40    24  *****
      88    2023-04-18 16:50    23  ****
      89    2023-04-18 17:00    23  ****
      90    2023-04-18 17:10    24  *****
     ...    ..(  2 skipped).    ..  *****
      93    2023-04-18 17:40    24  *****
      94    2023-04-18 17:50    23  ****
      95    2023-04-18 18:00    24  *****
      96    2023-04-18 18:10    24  *****
      97    2023-04-18 18:20    24  *****
      98    2023-04-18 18:30    23  ****
      99    2023-04-18 18:40    24  *****
     100    2023-04-18 18:50    24  *****
     101    2023-04-18 19:00    24  *****
     102    2023-04-18 19:10    23  ****
     103    2023-04-18 19:20    24  *****
     104    2023-04-18 19:30    23  ****
     105    2023-04-18 19:40    24  *****
     ...    ..( 15 skipped).    ..  *****
     121    2023-04-18 22:20    24  *****
     122    2023-04-18 22:30    23  ****
     ...    ..(  5 skipped).    ..  ****
       0    2023-04-18 23:30    23  ****
       1    2023-04-18 23:40    22  ***
       2    2023-04-18 23:50    22  ***
       3    2023-04-19 00:00    23  ****
       4    2023-04-19 00:10    22  ***
       5    2023-04-19 00:20    23  ****
     ...    ..( 22 skipped).    ..  ****
      28    2023-04-19 04:10    23  ****
      29    2023-04-19 04:20    22  ***
     ...    ..( 30 skipped).    ..  ***
      60    2023-04-19 09:30    22  ***
      61    2023-04-19 09:40    21  **
      62    2023-04-19 09:50    21  **
      63    2023-04-19 10:00    22  ***
     ...    ..(  7 skipped).    ..  ***
      71    2023-04-19 11:20    22  ***
      72    2023-04-19 11:30    23  ****
     ...    ..(  2 skipped).    ..  ****
      75    2023-04-19 12:00    23  ****
      76    2023-04-19 12:10    25  ******
      77    2023-04-19 12:20    23  ****
     ...    ..(  2 skipped).    ..  ****
      80    2023-04-19 12:50    23  ****

    SCT Error Recovery Control:
               Read: Disabled
              Write: Disabled

    Device Statistics (GP Log 0x04)
    Page  Offset Size        Value Flags Description
    0x01  =====  =               =  ===  == General Statistics (rev 1) ==
    0x01  0x008  4               3  ---  Lifetime Power-On Resets
    0x01  0x010  4              75  ---  Power-on Hours
    0x01  0x018  6               0  ---  Logical Sectors Written
    0x01  0x020  6               0  ---  Number of Write Commands
    0x01  0x028  6           22176  ---  Logical Sectors Read
    0x01  0x030  6             450  ---  Number of Read Commands
    0x01  0x038  6         1679000  ---  Date and Time TimeStamp
    0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
    0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
    0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
    0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
    0x05  0x008  1              23  ---  Current Temperature
    0x05  0x020  1              40  ---  Highest Temperature
    0x05  0x028  1              20  ---  Lowest Temperature
    0x05  0x058  1              70  ---  Specified Maximum Operating Temperature
    0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
    0x06  0x008  4               4  ---  Number of Hardware Resets
    0x06  0x010  4               0  ---  Number of ASR Events
    0x06  0x018  4               0  ---  Number of Interface CRC Errors
    0x07  =====  =               =  ===  == Solid State Device Statistics (rev 1) ==
    0x07  0x008  1               0  N--  Percentage Used Endurance Indicator
                                    |||_ C monitored condition met
                                    ||__ D supports DSN
                                    |___ N normalized value

    Pending Defects log (GP Log 0x0c) not supported

    SATA Phy Event Counters (GP Log 0x11)
    ID      Size     Value  Description
    0x0001  2            0  Command failed due to ICRC error
    0x0002  2            0  R_ERR response for data FIS
    0x0003  2            0  R_ERR response for device-to-host data FIS
    0x0004  2            0  R_ERR response for host-to-device data FIS
    0x0005  2            0  R_ERR response for non-data FIS
    0x0006  2            0  R_ERR response for device-to-host non-data FIS
    0x0007  2            0  R_ERR response for host-to-device non-data FIS
    0x0008  2            0  Device-to-host non-data FIS retries
    0x0009  2            9  Transition from drive PhyRdy to drive PhyNRdy
    0x000a  2            4  Device-to-host register FISes sent due to a COMRESET
    0x000b  2            0  CRC errors within host-to-device FIS
    0x000d  2            0  Non-CRC errors within host-to-device FIS
    0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
    0x0010  2            0  R_ERR response for host-to-device data FIS, non-CRC
    0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
    0x0013  2            0  R_ERR response for host-to-device non-data FIS, non-CRC

    root@fireball / #


    You see any clues in there?  I'm thinking about just leaving it as the
    default tho.  It seems to work for others.  Surely mine isn't that
    unique.  lol 

    Dale

    :-)  :-) 

    P. S.  I edited the serial number parts.  ;-) 

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mark Knecht@21:1/5 to rdalek1967@gmail.com on Wed Apr 19 20:20:02 2023
    On Wed, Apr 19, 2023 at 10:59 AM Dale <rdalek1967@gmail.com> wrote:

    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

    With my HDD:

    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf
    0 + 512 0 0

    :)


    When I run that command, sdd is my SSD drive, ironic I know. Anyway, it doesn't show block sizes. It returns nothing.

    root@fireball / # smartctl -x /dev/sdd | grep -A2 'Supported LBA Sizes'
    root@fireball / #

    Note that all of these technologies, HDD, SSD, M.2, report different things
    and don't always report them the same way. This is an SSD in my
    Plex backup server:

    mark@science:~$ sudo smartctl -x /dev/sdb
    [sudo] password for mark:
    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local build)
    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family: Crucial/Micron Client SSDs
    Device Model: CT250MX500SSD1
    Serial Number: 1905E1E79C72
    LU WWN Device Id: 5 00a075 1e1e79c72
    Firmware Version: M3CR023
    User Capacity: 250,059,350,016 bytes [250 GB]
    Sector Sizes: 512 bytes logical, 4096 bytes physical

    In my case the physical block is 4096 bytes but
    addressable in 512 byte blocks. It appears that
    yours is 512 byte physical blocks.

    [QUOTE]
    === START OF INFORMATION SECTION ===
    Model Family: Samsung based SSDs
    Device Model: Samsung SSD 870 EVO 500GB
    Serial Number: S6PWNXXXXXXXXXXX
    LU WWN Device Id: 5 002538 XXXXXXXXXX
    Firmware Version: SVT01B6Q
    User Capacity: 500,107,862,016 bytes [500 GB]
    Sector Size: 512 bytes logical/physical
    [/QUOTE]


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nikos Chantziaras@21:1/5 to Dale on Wed Apr 19 21:40:01 2023
    On 19/04/2023 22:26, Dale wrote:
    So for future reference, let it format with the default?  I'm also
    curious if when it creates the file system it will notice this and
    adjust automatically. It might.  Maybe?

    AFAIK, SSDs will internally convert to 4096 in their firmware even if
    they report a physical sector size of 512 through SMART. Just a
    compatibility thing. So formatting with 4096 is fine and gets rid of the internal conversion.

    I believe Windows always uses 4096 by default and thus it's reasonable
    to assume that most SSDs are aware of that.
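    In practice mke2fs already picks a 4096-byte block size for any
    non-tiny ext4 filesystem, but it can be pinned explicitly and verified.
    A sketch, demonstrated on a scratch image file so nothing real is
    touched; on the actual drive you would point mkfs.ext4 at a partition
    such as /dev/sdd1 (an example name) instead:

    ```shell
    # Create a throwaway image, format it with an explicit 4096-byte
    # block size, then read the block size back with dumpe2fs.
    truncate -s 64M /tmp/blocksize-test.img
    mkfs.ext4 -q -F -b 4096 /tmp/blocksize-test.img
    dumpe2fs -h /tmp/blocksize-test.img 2>/dev/null | grep 'Block size'
    rm /tmp/blocksize-test.img
    ```

    The grep should report a block size of 4096.
    
    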

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dale@21:1/5 to Mark Knecht on Wed Apr 19 21:30:01 2023
    Mark Knecht wrote:


    On Wed, Apr 19, 2023 at 10:59 AM Dale <rdalek1967@gmail.com <mailto:rdalek1967@gmail.com>> wrote:

    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

    With my HDD:

       # smartctl -x /dev/sda | grep -i 'sector size'
       Sector Sizes:     512 bytes logical, 4096 bytes physical
    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt  Data  Metadt  Rel_Perf
     0 +     512       0         0

    :)


    When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it doesn't show block sizes.  It returns nothing.

    root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
    root@fireball / #

    Note that all of these technologies, HDD, SSD, M.2, report different
    things
    and don't always report them the same way. This is an SSD in my 
    Plex backup server:

    mark@science:~$ sudo smartctl -x /dev/sdb
    [sudo] password for mark:  
    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local
    build)
    Copyright (C) 2002-20, Bruce Allen, Christian Franke,
    www.smartmontools.org <http://www.smartmontools.org>

    === START OF INFORMATION SECTION ===
    Model Family:     Crucial/Micron Client SSDs
    Device Model:     CT250MX500SSD1
    Serial Number:    1905E1E79C72
    LU WWN Device Id: 5 00a075 1e1e79c72
    Firmware Version: M3CR023
    User Capacity:    250,059,350,016 bytes [250 GB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical

    In my case the physical block is 4096 bytes but 
    addressable in 512 byte blocks. It appears that
    yours is 512 byte physical blocks.

    [QUOTE]
    === START OF INFORMATION SECTION ===
    Model Family:     Samsung based SSDs
    Device Model:     Samsung SSD 870 EVO 500GB
    Serial Number:    S6PWNXXXXXXXXXXX
    LU WWN Device Id: 5 002538 XXXXXXXXXX
    Firmware Version: SVT01B6Q
    User Capacity:    500,107,862,016 bytes [500 GB]
    Sector Size:      512 bytes logical/physical
    [QUOTE]


    So for future reference, let it format with the default?  I'm also
    curious if when it creates the file system it will notice this and
    adjust automatically. It might.  Maybe?

    Dale

    :-)  :-) 

    P. S. Dang squirrels got in my greenhouse and dug up my seedlings. 
    Squirrel hunting is next on my agenda.  :-@


  • From Mark Knecht@21:1/5 to realnc@gmail.com on Wed Apr 19 22:10:01 2023
    On Wed, Apr 19, 2023 at 12:39 PM Nikos Chantziaras <realnc@gmail.com> wrote:

    On 19/04/2023 22:26, Dale wrote:
    So for future reference, let it format with the default? I'm also
    curious if when it creates the file system it will notice this and
    adjust automatically. It might. Maybe?

    AFAIK, SSDs will internally convert to 4096 in their firmware even if
    they report a physical sector size of 512 through SMART. Just a
    compatibility thing. So formatting with 4096 is fine and gets rid of the internal conversion.

    I suspect this is right, or has been mostly right in the past.

    I think technically they default to the physical block size internally
    and the earlier ones, attempting to be more compatible with HDDs,
    had 4K blocks. Some of the newer chips now have 16K blocks but
    still support 512B Logical Block Addressing.

    All of these devices are essentially small computers. They have internal
    controllers and DRAM caches, usually in the 1-2GB sort of range but getting
    larger. The bus speeds they quote are achievable because data is moving for
    the most part in and out of cache in the drive.

    In Dale's case, if he has a 4K file system block size then it's going to
    send 4K to the drive, and the drive will perform eight 512-byte writes to
    put it in flash.

    If I have the same 4K file system block size I send 4K to the drive but
    my physical block size is 4K so it's a single write cycle to get it
    into flash.

    What I *think* is true is that any time your file system block size is
    smaller than the physical block size on the storage element then
    simplistically you have the risk of write amplification.
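    Mark's point can be put in toy numbers. A sketch with assumed sizes (the 16K figure matches the newer chips he mentions; real drives mitigate this in firmware):

```shell
# Toy sketch of write amplification: when the filesystem block is smaller
# than the drive's physical block, one small write can force the drive to
# rewrite a whole physical block (read-modify-write).
fs_block=4096      # filesystem block size in bytes
phys_block=16384   # assumed NAND physical block size
if [ "$fs_block" -lt "$phys_block" ]; then
    echo "worst case: $(( phys_block / fs_block ))x write amplification"
fi
```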

    What I know I'm not sure about is how inodes factor into this.

    For instance:

    mark@science2:~$ ls -i
    35790149 000_NOT_BACKED_UP
    33320794 All_Files.txt
    33337840 All_Sizes_2.txt
    33337952 All_Sizes.txt
    33329818 All_Sorted.txt
    33306743 ardour_deps_install.sh
    33309917 ardour_deps_remove.sh
    33557560 Arena_Chess
    33423859 Astro_Data
    33560973 Astronomy
    33423886 Astro_science
    33307443 'Backup codes - Login.gov.pdf'
    33329080 basic-install.sh
    33558634 bin
    33561132 biosim4_functions.txt
    33316157 Boot_Config.txt
    33560975 Builder
    33338822 CFL_88_F_Bright_Syn.xsc

    If the inodes are on the disk then how are they
    stored? Does a single inode occupy a physical
    block? A 512 byte LBA? Something else?

    I have no clue.


    I believe Windows always uses 4096 by default and thus it's reasonable
    to assume that most SSDs are aware of that.



  • From Frank Steinmetzger@21:1/5 to All on Thu Apr 20 00:20:02 2023
    Am Wed, Apr 19, 2023 at 01:00:33PM -0700 schrieb Mark Knecht:


    I think technically they default to the physical block size internally
    and the earlier ones, attempting to be more compatible with HDDs,
    had 4K blocks. Some of the newer chips now have 16K blocks but
    still support 512B Logical Block Addressing.

    All of these devices are essentially small computers. They have internal controllers, DRAM caches usually in the 1-2GB sort of range but getting larger.

    Actually, cheap(er) SSDs don’t have their own DRAM, but rely on the host for this. There is an ongoing debate in tech forums whether that is a bad thing
    or not. A RAM cache can help optimise writes by caching many small writes
    and aggregating them into larger blocks.

    The bus speeds they quote is because data is moving for the most
    part in and out of cache in the drive.

    Are you talking about the pseudo SLC cache? Because AFAIK the DRAM cache has no influence on read performance.

    What I know I'm not sure about is how inodes factor into this.

    For instance:

    mark@science2:~$ ls -i
    35790149 000_NOT_BACKED_UP
    33320794 All_Files.txt
    33337840 All_Sizes_2.txt
    33337952 All_Sizes.txt
    33329818 All_Sorted.txt
    33306743 ardour_deps_install.sh
    33309917 ardour_deps_remove.sh
    33557560 Arena_Chess
    33423859 Astro_Data
    33560973 Astronomy
    33423886 Astro_science
    33307443 'Backup codes - Login.gov.pdf'
    33329080 basic-install.sh
    33558634 bin
    33561132 biosim4_functions.txt
    33316157 Boot_Config.txt
    33560975 Builder
    33338822 CFL_88_F_Bright_Syn.xsc

    If the inodes are on the disk then how are they
    stored? Does a single inode occupy a physical
    block? A 512 byte LBA? Something else?

    man mkfs.ext4 says:
    […] the default inode size is 256 bytes for most file systems, except for small file systems where the inode size will be 128 bytes. […]

    And if a file is small enough, it can actually fit inside the inode itself, saving the expense of another FS sector.


    When formatting file systems, I usually lower the number of inodes from the default value to gain storage space. The default is one inode per 16 kB of
    FS size, which gives you 60 million inodes per TB. In practice, even one million per TB would be overkill in a use case like Dale’s media storage.¹ Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
    counting extra control metadata and ext4 redundancies.
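    Frank's figure checks out with straightforward shell arithmetic (a back-of-envelope sketch, per TB of filesystem):

```shell
# Default: one 256-byte inode per 16 kB of filesystem space.
tb=1000000000000
default_inodes=$(( tb / 16384 ))   # ~61 million inodes per TB
reduced_inodes=1000000             # ~1 million per TB instead
echo "$(( (default_inodes - reduced_inodes) * 256 / 1000000000 )) GB saved per TB"
```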

    The defaults are set in /etc/mke2fs.conf. It also contains some alternative values of bytes-per-inode for certain usage types. The type largefile allocates one inode per 1 MB, giving you 1 million inodes per TB of space. Since ext4 is much more efficient with inodes than ext3, it is even content with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
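    An excerpt of a typical /etc/mke2fs.conf showing those entries (exact values can differ between e2fsprogs versions):

```ini
[defaults]
	inode_size = 256
	# one inode per 16 kB of filesystem space by default
	inode_ratio = 16384

[fs_types]
	largefile = {
		# one inode per 1 MB
		inode_ratio = 1048576
	}
	largefile4 = {
		# one inode per 4 MB
		inode_ratio = 4194304
	}
```

    A usage type is selected at format time, e.g. `mkfs.ext4 -T largefile4 /dev/sdXn` (device name hypothetical).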

    For root partitions, I tend to allocate 1 million inodes, maybe some more
    for a full Gentoo-based desktop due to the portage tree’s sheer number of small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 500 k right now.


    ¹ Assuming one inode equals one directory or unfragmented file on ext4.
    I’m not sure what the allocation size limit for one inode is, but it is *very* large. Ext3 had a rather low limit, which is why it was so slow with big files. But that was one of the big improvements in ext4’s extended inodes, at the cost of double inode size to house the required metadata.

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    FINE: Tax for doing wrong. TAX: Fine for doing fine.

  • From Mark Knecht@21:1/5 to All on Thu Apr 20 03:10:01 2023
    I wonder. Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size??? I thought about du but given the number of files I have here, it
    would be a really HUGE list of files. Could take hours or more too. This
    is what KDE properties shows.

    I'm sure there are more accurate ways but

    sudo ls -R / | wc

    gives you the number of lines returned from the ls command. It's not perfect
    as there are blank lines in the ls but it's a start.

    My desktop machine has about 2.2M files.

    Again, there are going to be folks who can tell you how to remove blank
    lines and other cruft but it's a start.

    Only takes a minute to run on my Ryzen 9 5950X. YMMV.


  • From eric@21:1/5 to Dale on Thu Apr 20 06:50:01 2023
    On 4/19/23 21:23, Dale wrote:
    Mark Knecht wrote:

    I wonder.  Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size???  I thought about du but given the number of files I have here,
    it would be a really HUGE list of files. Could take hours or more
    too.  This is what KDE properties shows.

    I'm sure there are more accurate ways but

    sudo ls -R / | wc

    give you the number of lines returned from the ls command. It's not
    perfect as there are blank lines in the ls but it's a start.

    My desktop machine has about 2.2M files.

    Again, there are going to be folks who can tell you how to remove
    blank lines and other cruft but it's a start.

    Only takes a minute to run on my Ryzen 9 5950X. YMMV.


    I did a right click on the directory in Dolphin and selected
    properties.  It told me there is a little over 55,000 files.  Some 1,100 directories, not sure if directories use inodes or not. Basically, there
    is a little over 56,000 somethings on that file system.  I was curious
    what the smallest file is and the largest. No idea how to find that
    really.  Even du separates by directory not individual files regardless
    of directory.  At least the way I use it anyway.
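    For what it's worth, du does handle individual files when given -a; a minimal sketch (path hypothetical):

```shell
# List every file and directory with its size, then show the ten largest
du -a /path/to/dir | sort -n | tail -n 10
```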

    If I ever have to move things around again, I'll likely start a thread
    just for figuring out the setting for inodes.  I'll likely know more
    about the number of files too.

    Dale

    :-)  :-)

    If you do not mind using graphical solutions, Filelight can help you
    easily visualize where your largest directories and files are residing.

    https://packages.gentoo.org/packages/kde-apps/filelight

    Visualise disk usage with interactive map of concentric, segmented rings

    Eric

  • From Frank Steinmetzger@21:1/5 to All on Thu Apr 20 11:00:01 2023
    Am Wed, Apr 19, 2023 at 06:09:15PM -0700 schrieb Mark Knecht:
    I wonder. Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size??? I thought about du but given the number of files I have here, it would be a really HUGE list of files. Could take hours or more too. This
    is what KDE properties shows.

    I'm sure there are more accurate ways but

    sudo ls -R / | wc

    Number of directories (not accounting for symlinks):
    find -type d | wc -l

    Number of files (not accounting for symlinks):
    find -type f | wc -l

    give you the number of lines returned from the ls command. It's not perfect as there are blank lines in the ls but it's a start.

    My desktop machine has about 2.2M files.

    Again, there are going to be folks who can tell you how to remove blank
    lines and other cruft but it's a start.

    Or not produce them in the first place. ;-)

    Only takes a minute to run on my Ryzen 9 5950X. YMMV.

    It’s not a question of the processor, but of the storage device. And of your cache, because the second run will probably not use the device at all.

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    Bosses are like timpani: the more hollow they are, the louder they sound.

  • From Frank Steinmetzger@21:1/5 to I remember from yesterday that the on Thu Apr 20 11:00:02 2023
    Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
    Frank Steinmetzger wrote:
    <<<SNIP>>>

    When formatting file systems, I usually lower the number of inodes from the
    default value to gain storage space. The default is one inode per 16 kB of FS size, which gives you 60 million inodes per TB. In practice, even one million per TB would be overkill in a use case like Dale’s media storage.¹
    Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not
    counting extra control metadata and ext4 redundancies.

    If I ever rearrange my
    drives again and can change the file system, I may reduce the inodes at
    least on the ones I only have large files on.  Still tho, given I use
    LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I assume it increases the inodes as well.

    I remember from yesterday that the manpage says that inodes are added according to the bytes-per-inode value.

    I wonder.  Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size???

    The 20 smallest:
    `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

    The 20 largest: either use tail instead of head or reverse sorting with -r.
    You can also first pipe the output of stat into a file so you can sort and analyse the list more efficiently, including calculating averages.
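    Building on the same stat trick, an average file size is a one-liner as well (a sketch; run it from the directory of interest):

```shell
# Count regular files and compute their average size in bytes
find . -type f -print0 | xargs -0 stat -c '%s' \
    | awk '{ n++; sum += $1 } END { if (n) printf "%d files, avg %.0f bytes\n", n, sum/n }'
```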

    I thought about du but given the number of files I have here,
    it would be a really HUGE list of files.  Could take hours or more too. 

    I use a “cache” of text files with file listings of all my external drives.
    This allows me to glance over my entire data storage without having to plug
    in any drive. It uses tree underneath to get the list:

    `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`

    This gives me a list of all directories and files, with their full path,
    date and size information and accumulated directory size in a concise
    format. Add -pug to also include permissions.

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    Computers are the most congenial product of human laziness to-date.

  • From Peter Humphrey@21:1/5 to All on Thu Apr 20 12:10:01 2023
    On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
    Frank Steinmetzger wrote:
    Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
    Frank Steinmetzger wrote:
    <<<SNIP>>>

    When formatting file systems, I usually lower the number of inodes from the
    default value to gain storage space. The default is one inode per 16 kB of
    FS size, which gives you 60 million inodes per TB. In practice, even one
    million per TB would be overkill in a use case like Dale’s media storage.¹
    Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
    not counting extra control metadata and ext4 redundancies.

    If I ever rearrange my
    drives again and can change the file system, I may reduce the inodes at
    least on the ones I only have large files on. Still tho, given I use
    LVM and all, maybe that isn't a great idea. As I add drives with LVM, I
    assume it increases the inodes as well.

    I remember from yesterday that the manpage says that inodes are added according to the bytes-per-inode value.

    I wonder. Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size???

    The 20 smallest:
    `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

    The 20 largest: either use tail instead of head or reverse sorting with
    -r.
    You can also first pipe the output of stat into a file so you can sort and analyse the list more efficiently, including calculating averages.

    When I first run this while in / itself, it occurred to me that it
    doesn't specify what directory. I thought maybe changing to the
    directory I want it to look at would work but get this:


    root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
    -0 stat -c '%s %n' | sort -n | head -n 20`
    -bash: 2: command not found
    root@fireball /home/dale/Desktop/Crypt #


    It works if I'm in the / directory but not when I'm cd'd to the
    directory I want to know about. I don't see a spot to change it. Ideas?

    In place of "find -type..." say "find / -type..."

    --
    Regards,
    Peter.

  • From Dale@21:1/5 to Peter Humphrey on Thu Apr 20 11:50:01 2023
    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
    With my HDD:
    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf

    0 + 512 0 0

    :)
    When I run that command, sdd is my SDD drive, ironic I know. Anyway, it
    doesn't show block sizes. It returns nothing.
    I did say it was for an NVMe drive, Dale. If your drive was one of those, the kernel would have named it /dev/nvme0n1 or similar.


    Well, I was hoping it would work on all SDD type drives.  ;-) 

    Dale

    :-)  :-)

  • From Peter Humphrey@21:1/5 to All on Thu Apr 20 11:50:01 2023
    On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
    Peter Humphrey wrote:
    On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
    With my HDD:
    # smartctl -x /dev/sda | grep -i 'sector size'
    Sector Sizes: 512 bytes logical, 4096 bytes physical

    Or, with an NVMe drive:

    # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
    Supported LBA Sizes (NSID 0x1)
    Id Fmt Data Metadt Rel_Perf

    0 + 512 0 0

    :)

    When I run that command, sdd is my SDD drive, ironic I know. Anyway, it doesn't show block sizes. It returns nothing.

    I did say it was for an NVMe drive, Dale. If your drive was one of those, the kernel would have named it /dev/nvme0n1 or similar.

    --
    Regards,
    Peter.

  • From Dale@21:1/5 to Peter Humphrey on Thu Apr 20 13:00:01 2023
    Peter Humphrey wrote:
    On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
    Frank Steinmetzger wrote:
    Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
    Frank Steinmetzger wrote:
    <<<SNIP>>>

    When formatting file systems, I usually lower the number of inodes from the
    default value to gain storage space. The default is one inode per 16 kB of
    FS size, which gives you 60 million inodes per TB. In practice, even one
    million per TB would be overkill in a use case like Dale’s media storage.¹
    Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
    not counting extra control metadata and ext4 redundancies.
    If I ever rearrange my
    drives again and can change the file system, I may reduce the inodes at
    least on the ones I only have large files on. Still tho, given I use
    LVM and all, maybe that isn't a great idea. As I add drives with LVM, I
    assume it increases the inodes as well.
    I remember from yesterday that the manpage says that inodes are added
    according to the bytes-per-inode value.

    I wonder. Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size???
    The 20 smallest:
    `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

    The 20 largest: either use tail instead of head or reverse sorting with
    -r.
    You can also first pipe the output of stat into a file so you can sort and
    analyse the list more efficiently, including calculating averages.
    When I first run this while in / itself, it occurred to me that it
    doesn't specify what directory. I thought maybe changing to the
    directory I want it to look at would work but get this:


    root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
    -0 stat -c '%s %n' | sort -n | head -n 20`
    -bash: 2: command not found
    root@fireball /home/dale/Desktop/Crypt #


    It works if I'm in the / directory but not when I'm cd'd to the
    directory I want to know about. I don't see a spot to change it. Ideas.
    In place of "find -type..." say "find / -type..."



    Ahhh, that worked.  I also realized I need to leave off the ' at the
    beginning and end.  I thought I left those out.  I copy and paste a
    lot.  lol 

    It only took a couple dozen files to start getting up to some size. 
    Most of the few small files are text files with little notes about a
    video.  For example, if building something I will create a text file
    that lists what is needed to build what is in the video.  Other than a
    few of those, file size reaches a few 100MBs pretty quick.  So, the
    number of small files is pretty small.  That is good to know. 

    Thanks for the command.  I never was good with xargs, sed and such.  It
    took me a while to get used to grep.  ROFL 

    Dale

    :-)  :-) 

  • From Frank Steinmetzger@21:1/5 to All on Thu Apr 20 14:30:01 2023
    Am Thu, Apr 20, 2023 at 04:29:59AM -0500 schrieb Dale:

    I wonder.  Is there a way to find out the smallest size file in a
    directory or sub directory, largest files, then maybe a average file
    size???
    The 20 smallest:
    `find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`

    The 20 largest: either use tail instead of head or reverse sorting with -r. You can also first pipe the output of stat into a file so you can sort and analyse the list more efficiently, including calculating averages.
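    The smallest/largest/average statistics Frank describes can be sketched in one pass; the temp directory, file names, and the /tmp/sizes.txt path below are all invented for the demo:

```shell
# Demo setup: two files with known sizes (every path here is made up).
dir=$(mktemp -d)
printf 'abc' > "$dir/small.txt"             # 3 bytes
head -c 1024 /dev/zero > "$dir/big.bin"     # 1024 bytes

# Dump "size name" pairs once, then reuse the file for several statistics.
find "$dir" -type f -print0 | xargs -0 stat -c '%s %n' > /tmp/sizes.txt

sort -n  /tmp/sizes.txt | head -n 20    # 20 smallest
sort -rn /tmp/sizes.txt | head -n 20    # 20 largest (or: sort -n | tail -n 20)
awk '{ sum += $1 } END { if (NR) printf "%d files, average %d bytes\n", NR, sum/NR }' /tmp/sizes.txt
# -> 2 files, average 513 bytes
```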

    When I first run this while in / itself, it occurred to me that it
    doesn't specify what directory.  I thought maybe changing to the
    directory I want it to look at would work but get this: 

    Yeah, either cd into the directory first, or pass it to find. But it’s like tar: I can never remember in which order I need to feed stuff to find. One relevant addition could be -xdev, to have find halt at file system
    boundaries. So:

    find /path/to/dir -xdev -type f ! -type l …

    root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
    -0 stat -c '%s %n' | sort -n | head -n 20`
    -bash: 2: command not found
    root@fireball /home/dale/Desktop/Crypt #

    I used the `` in the mail text as a kind of hint: “everything between is a command”. So when you paste that into the terminal, it is executed, and the result of it is substituted. Meaning: the command’s output is taken as the new input and executed. And since the first word of the output was “2”, you
    get that error message. Sorry about the confusion.
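    The substitution behaviour is easy to reproduce with harmless commands (the strings here are made up):

```shell
echo `echo hello`   # backticks substitute the output first, so this prints: hello
`echo true`         # the substituted text "true" is itself run as a command
# When the substituted output starts with a number (e.g. a file size),
# bash looks for a command with that name and fails, just like the
# "-bash: 2: command not found" above:
`echo 1234` 2>/dev/null || echo "no command named 1234"
```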

    I thought about du but given the number of files I have here,
    it would be a really HUGE list of files.  Could take hours or more too. 
    I use a “cache” of text files with file listings of all my external drives.
    This allows me to glance over my entire data storage without having to plug
    in any drive. It uses tree underneath to get the list:

    `tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`

    This gives me a list of all directories and files, with their full path, date and size information and accumulated directory size in a concise format. Add -pug to also include permissions.


    Save this for later use.  ;-)

    I built a wrapper script around it, to which I pass the directory I want to read (usually the root of a removable media). The script creates a new text file, with the current date and the directory in its name, and compresses it at the end. This allows me to diff those files in vim and see what changed over time. It also updates a symlink to the current version for quick access via bash alias.
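    Frank's actual script isn't shown in the mail, but a minimal sketch of what he describes might look like this; the function name, the LISTING_DIR variable, and all paths are invented for illustration:

```shell
#!/bin/sh
# Hypothetical reconstruction of the wrapper described above: snapshot a
# directory listing with tree, stamp it with the date, compress it, and
# keep a "latest" symlink for quick access (e.g. via a bash alias).
set -eu

snapshot_listing() {
    dir=$1
    outdir=${LISTING_DIR:-"$HOME/drive-listings"}
    mkdir -p "$outdir"

    name=$(basename "$dir")
    out="$outdir/${name}_$(date +%Y-%m-%d).txt"

    # The tree invocation from the mail: full paths, dates, sizes,
    # accumulated directory sizes, directories first.
    tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T" "$dir" > "$out"

    gzip -f "$out"
    ln -sfn "${out}.gz" "$outdir/${name}_latest.txt.gz"
}

# Example call (hypothetical mount point):
#   snapshot_listing /mnt/external-drive
```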

    --
    Grüße | Greetings | Qapla’
    Please do not share anything from, with or about me on any social network.

    ...llaw eht no rorrim ,rorriM


  • From Nikos Chantziaras@21:1/5 to Dale on Thu Apr 20 15:30:01 2023
    On 20/04/2023 13:59, Dale wrote:
    In place of "find -type..." say "find / -type..."

    Ahhh, that worked.  I also realized I need to leave off the ' at the beginning and end.  I thought I left those out.  I copy and paste a
    lot.  lol

    Btw, if you only want to do this for the root filesystem and exclude all
    other mounted filesystems, also use the -xdev option:

    find / -xdev -type ...

  • From Wol@21:1/5 to Dale on Fri Apr 21 01:10:01 2023
    On 20/04/2023 05:23, Dale wrote:
    Some 1,100 directories, not sure if directories use inodes or not.

    "Everything is a file".

    A directory is just a data file with a certain structure that maps names
    to inodes.
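    Dale's earlier question about whether directories use inodes is easy to check with stat; the temp directory below is just a throwaway example:

```shell
d=$(mktemp -d)
stat -c 'inode=%i size=%s type=%F' "$d"   # prints something like: inode=393219 size=40 type=directory
ls -lid "$d"    # -d lists the directory itself; the first column is its inode number
```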

    It might still be there somewhere - I can't imagine it's been deleted,
    just forgotten - but I believe some editors (emacs probably) would let
    you open that file, so you could rename files by editing the line that
    defined them, you could unlink a file by deleting the line, etc etc.

    Obviously a very dangerous mode, but Unix was always happy about handing
    out powerful footguns willy nilly.

    Cheers,
    Wol
