On 16/04/2023 01:47, Dale wrote:
Anything else that makes these special? Any tips or tricks?
Only three things.
1. Make sure the fstrim service is active (should run every week by
default, at least with systemd, "systemctl enable fstrim.timer".)
2. Don't use the "discard" mount option.
3. Use smartctl to keep track of TBW.
People are always mentioning performance, but it's not the most important
factor for me. The more important factor is longevity. You want your
storage device to last as long as possible, and fstrim helps, discard
hurts.
With "smartctl -x /dev/sda" (or whatever device your SSD is in /dev)
pay attention to the "Data Units Written" field. Your 500GB 870 Evo
has a TBW rating of 300. That's "terabytes written". This is the
manufacturer's "guarantee" that the device won't fail prior to writing
that many terabytes to it. When you reach that, it doesn't mean it
will fail, but it does mean you might want to start thinking of
replacing it with a new one just in case, and then keep using it as a secondary drive.
If you use KDE, you can also view that SMART data in the "SMART
Status" UI (just type "SMART status" in the KDE application launcher.)
On 18/04/2023 18:05, Dale wrote:
I compile on a spinning rust
drive and use -k to install the built packages on the live system. That
should help minimize the writes.
I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
keep binary packages around, those I have on my HDD, as well as the distfiles:
DISTDIR="/mnt/Data/gentoo/distfiles"
PKGDIR="/mnt/Data/gentoo/binpkgs"
Since I still need a spinning rust
drive for swap and such, I thought about putting /var on spinning rust.
Nah. The data written there is absolutely minuscule. Firefox writes
like 10 times more just while running it without even any web page
loaded... And for actual browsing, it becomes more like 1000 times
more (mostly the Firefox cache.)
I wouldn't worry too much about it. I've been using my current SSD
since 2020, and I'm at 7TBW right now (out of 200 the drive is rated
for) and I dual boot Windows and install/uninstall large games on it
quite often. So with an average of 3TBW per year, I'd need over 80
years to reach 200TBW :-P But I mentioned it in case your use case is different (like large video files or recording and whatnot.)
I just use tmpfs for /var/tmp/portage (16GB, I'm on 32GB RAM.) When I
keep binary packages around, those I have on my HDD, as well as the
distfiles:
DISTDIR="/mnt/Data/gentoo/distfiles"
PKGDIR="/mnt/Data/gentoo/binpkgs"
Most of mine is in tmpfs too except for the larger packages, such as
Firefox, LOo and a couple others. Thing is, those few large ones would
rack up a lot of writes themselves since they are so large. That said,
it would be faster. 😉
Mark Knecht wrote:
On Tue, Apr 18, 2023 at 1:02 PM Dale <rdalek1967@gmail.com> wrote:
<SNIP>
Someone mentioned 16K block size.<SNIP>
I mentioned it but I'm NOT suggesting it.
It would be the -b option if you were to do it for ext4.
I'm using the default block size (4k) on all my SSDs and M.2's and
as I've said a couple of times, I'm going to blast past the 5 year
warranty time long before I write too many terabytes.
Keep it simple.
- Mark
One reason I ask, some info I found claimed it isn't even supported. It
actually spits out an error message and doesn't create the file system.
I wasn't sure if that info was outdated or what so I thought I'd ask.
Dale
Given how I plan to use this drive, that should last a long time. I'm
just putting the OS stuff on the drive and I compile on a spinning rust
drive and use -k to install the built packages on the live system. That should help minimize the writes.
I read about that bytes written. With the way you explained it, it
confirms what I was thinking it meant. That's a lot of data. I
currently have around 100TBs of drives lurking about, either in my rig
or for backups. I'd have to write three times that amount of data on
that little drive. That's a LOT of data for a 500GB drive.
/var/tmp/portage on tmpfs. And on every disk I allocate a swap partition
equal to twice the mobo's max memory. Three drives times 64GB times two
is a helluva lot of swap.
Uhm … why? The moniker of swap = 2×RAM comes from times when RAM was
scarce. What do you need so much swap for, especially with 32 GB RAM to
begin with? And if you really do have use cases which cause regular
swapping, it’d be less painful if you just added some more RAM.
Am Tue, Apr 18, 2023 at 10:05:27AM -0500 schrieb Dale:
Given how I plan to use this drive, that should last a long time. I'm
just putting the OS stuff on the drive and I compile on a spinning rust
drive and use -k to install the built packages on the live system. That
should help minimize the writes.
Well, 300 TB over 5 years is 60 TB per year, or 165 GB per day. Every day.
I’d say don’t worry. Besides: endurance tests showed that SSDs were able to
withstand multiples of their guaranteed TBW until they actually failed (of
course there are always exceptions to the rule).
I read about that bytes written. With the way you explained it, it
confirms what I was thinking it meant. That's a lot of data. I
currently have around 100TBs of drives lurking about, either in my rig
or for backups. I'd have to write three times that amount of data on
that little drive. That's a LOT of data for a 500GB drive.
If you use ext4, run `dumpe2fs -h /dev/your-root-partition | grep Lifetime`
to see how much data has been written to that partition since you formatted
it. Just to get an idea of what you are looking at on your setup.
Filesystem created: Sun Apr 15 03:24:56 2012
Lifetime writes: 993 GB
That's for the main / partition. I have /usr on its own partition tho.
Filesystem created: Sun Apr 15 03:25:48 2012
Lifetime writes: 1063 GB
I'd think that / and /usr would be the most changed parts of the OS.
After all, /bin and /sbin are on / too as is /lib*. If that is even
remotely correct, both would only be around 2TBs. That dang thing may
outlive me even if I don't try to minimize writes. ROFLMBO
On 19/04/2023 04:45, Dale wrote:
Filesystem created: Sun Apr 15 03:24:56 2012
Lifetime writes: 993 GB
That's for the main / partition. I have /usr on it's own partition tho.
Filesystem created: Sun Apr 15 03:25:48 2012
Lifetime writes: 1063 GB
I'd think that / and /usr would be the most changed parts of the OS.
After all, /bin and /sbin are on / too as is /lib*. If that is even
remotely correct, both would only be around 2TBs. That dang thing may
outlive me even if I don't try to minimize writes. ROFLMBO
I believe this only shows the lifetime writes to that particular
filesystem since it's been created?
You can use smartctl here too. At least on my HDD, the HDD's firmware
keeps track of the lifetime logical sectors written. Logical sectors
are 512 bytes (physical are 4096). The logical sector size is also
shown by smartctl.
With my HDD:
# smartctl -x /dev/sda | grep -i 'sector size'
Sector Sizes: 512 bytes logical, 4096 bytes physical
Then to get the total logical sectors written:
# smartctl -x /dev/sda | grep -i 'sectors written'
0x01 0x018 6 37989289142 --- Logical Sectors Written
Converting that to terabytes written with "bc -l":
37989289142 * 512 / 1024^4
17.69014…
Almost 18TB.
On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
With my HDD:
# smartctl -x /dev/sda | grep -i 'sector size'
Sector Sizes: 512 bytes logical, 4096 bytes physical
Or, with an NVMe drive:
# smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
Data Units Read: 28,823,498 [14.7 TB]
Data Units Written: 28,560,888 [14.6 TB]
Host Read Commands: 137,865,594
Host Write Commands: 209,406,594
:)
Peter Humphrey wrote:
On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
With my HDD:
# smartctl -x /dev/sda | grep -i 'sector size'
Sector Sizes: 512 bytes logical, 4096 bytes physical
Or, with an NVMe drive:
# smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
:)
When I run that command, sdd is my SSD drive, ironic I know. Anyway, it
doesn't show block sizes. It returns nothing.
root@fireball / # smartctl -x /dev/sdd | grep -A2 'Supported LBA Sizes'
root@fireball / #
So for future reference, let it format with the default? I'm also
curious if when it creates the file system it will notice this and
adjust automatically. It might. Maybe?
On Wed, Apr 19, 2023 at 10:59 AM Dale <rdalek1967@gmail.com> wrote:
Peter Humphrey wrote:
<SNIP>
When I run that command, sdd is my SSD drive, ironic I know. Anyway, it
doesn't show block sizes. It returns nothing.
root@fireball / # smartctl -x /dev/sdd | grep -A2 'Supported LBA Sizes'
root@fireball / #
Note that all of these technologies, HDD, SSD, M.2, report different
things and don't always report them the same way. This is an SSD in my
Plex backup server:
mark@science:~$ sudo smartctl -x /dev/sdb
[sudo] password for mark:
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron Client SSDs
Device Model: CT250MX500SSD1
Serial Number: 1905E1E79C72
LU WWN Device Id: 5 00a075 1e1e79c72
Firmware Version: M3CR023
User Capacity: 250,059,350,016 bytes [250 GB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
In my case the physical block is 4096 bytes but
addressable in 512 byte blocks. It appears that
yours is 512 byte physical blocks.
[QUOTE]
=== START OF INFORMATION SECTION ===
Model Family: Samsung based SSDs
Device Model: Samsung SSD 870 EVO 500GB
Serial Number: S6PWNXXXXXXXXXXX
LU WWN Device Id: 5 002538 XXXXXXXXXX
Firmware Version: SVT01B6Q
User Capacity: 500,107,862,016 bytes [500 GB]
Sector Size: 512 bytes logical/physical
[QUOTE]
On 19/04/2023 22:26, Dale wrote:
So for future reference, let it format with the default? I'm also
curious if when it creates the file system it will notice this and
adjust automatically. It might. Maybe?
AFAIK, SSDs will internally convert to 4096 in their firmware even if
they report a physical sector size of 512 through SMART. Just a
compatibility thing. So formatting with 4096 is fine and gets rid of the internal conversion.
I believe Windows always uses 4096 by default and thus it's reasonable
to assume that most SSDs are aware of that.
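For the record, the filesystem block size is fixed at mkfs time with -b. A throwaway demo on a file-backed image instead of a real disk (requires e2fsprogs):

```shell
# Format a scratch image with an explicit 4096-byte block size,
# then read the block size back with dumpe2fs
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F -b 4096 "$img"
bs=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block size:/ {print $3}')
echo "Block size: $bs"
rm -f "$img"
```

(4096 is also what mkfs.ext4 picks by default for any non-tiny filesystem, so leaving -b off gives the same result.)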
I think technically they default to the physical block size internally
and the earlier ones, attempting to be more compatible with HDDs,
had 4K blocks. Some of the newer chips now have 16K blocks but
still support 512B Logical Block Addressing.
All of these devices are essentially small computers. They have internal
controllers, DRAM caches usually in the 1-2GB sort of range but getting
larger. The bus speeds they quote is because data is moving for the most
part in and out of cache in the drive.
In Dale's case, if he has a 4K file system block size then it's going to
send 4K into flash.
What I *think* is true is that any time your file system block size is
smaller than the physical block size on the storage element then
simplistically you have the risk of write amplification.
What I know I'm not sure about is how inodes factor into this.
For instance:
mark@science2:~$ ls -i
35790149 000_NOT_BACKED_UP
33320794 All_Files.txt
33337840 All_Sizes_2.txt
33337952 All_Sizes.txt
33329818 All_Sorted.txt
33306743 ardour_deps_install.sh
33309917 ardour_deps_remove.sh
33557560 Arena_Chess
33423859 Astro_Data
33560973 Astronomy
33423886 Astro_science
33307443 'Backup codes - Login.gov.pdf'
33329080 basic-install.sh
33558634 bin
33561132 biosim4_functions.txt
33316157 Boot_Config.txt
33560975 Builder
33338822 CFL_88_F_Bright_Syn.xsc
If the inodes are on the disk then how are they
stored? Does a single inode occupy a physical
block? A 512 byte LBA? Something else?
Mark Knecht wrote:
I wonder. Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe a average file
size??? I thought about du but given the number of files I have here,
it would be a really HUGE list of files. Could take hours or more
too. This is what KDE properties shows.
I'm sure there are more accurate ways but
sudo ls -R / | wc
gives you the number of lines returned from the ls command. It's not
perfect as there are blank lines in the ls but it's a start.
My desktop machine has about 2.2M files.
Again, there are going to be folks who can tell you how to remove
blank lines and other cruft but it's a start.
Only takes a minute to run on my Ryzen 9 5950X. YMMV.
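One way around the blank lines is to skip ls and count regular files directly with find. A quick sketch on a scratch directory (paths made up):

```shell
# Count regular files only -- no blank lines, no directory headers
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
mkdir "$dir/sub"
touch "$dir/sub/c"
count=$(find "$dir" -type f | wc -l)
echo "$count files"
rm -r "$dir"
```

For the whole system that would be something like "sudo find / -xdev -type f | wc -l".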
I did a right click on the directory in Dolphin and selected
properties. It told me there is a little over 55,000 files. Some 1,100 directories, not sure if directories use inodes or not. Basically, there
is a little over 56,000 somethings on that file system. I was curious
what the smallest file is and the largest. No idea how to find that
really. Even du separates by directory not individual files regardless
of directory. At least the way I use it anyway.
If I ever have to move things around again, I'll likely start a thread
just for figuring out the setting for inodes. I'll likely know more
about the number of files too.
Dale
:-) :-)
Frank Steinmetzger wrote:
<<<SNIP>>>
When formatting file systems, I usually lower the number of inodes from
the default value to gain storage space. The default is one inode per
16 kB of FS size, which gives you 60 million inodes per TB. In practice,
even one million per TB would be overkill in a use case like Dale's
media storage.¹
Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
not counting extra control metadata and ext4 redundancies.
If I ever rearrange my
drives again and can change the file system, I may reduce the inodes at
least on the ones I only have large files on. Still tho, given I use
LVM and all, maybe that isn't a great idea. As I add drives with LVM, I assume it increases the inodes as well.
I wonder. Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe a average file
size???
I thought about du but given the number of files I have here,
it would be a really HUGE list of files. Could take hours or more too.
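The bytes-per-inode value Frank mentions is mkfs.ext4's -i option. A disposable file-backed demo (requires e2fsprogs; the 1 MiB figure is just illustrative):

```shell
# One inode per 1 MiB of space instead of the default 16 KiB
img=$(mktemp)
truncate -s 128M "$img"
mkfs.ext4 -q -F -i 1048576 "$img"
inodes=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Inode count:/ {print $3}')
echo "$inodes inodes for a 128 MiB filesystem"
rm -f "$img"
```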
Frank Steinmetzger wrote:
Am Wed, Apr 19, 2023 at 06:32:45PM -0500 schrieb Dale:
Frank Steinmetzger wrote:
<<<SNIP>>>
When formatting file systems, I usually lower the number of inodes from
the default value to gain storage space. The default is one inode per
16 kB of FS size, which gives you 60 million inodes per TB. In practice,
even one million per TB would be overkill in a use case like Dale's
media storage.¹
Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB,
not counting extra control metadata and ext4 redundancies.
If I ever rearrange my
drives again and can change the file system, I may reduce the inodes at
least on the ones I only have large files on. Still tho, given I use
LVM and all, maybe that isn't a great idea. As I add drives with LVM, I
assume it increases the inodes as well.
I remember from yesterday that the manpage says that inodes are added according to the bytes-per-inode value.
I wonder. Is there a way to find out the smallest size file in a
directory or sub directory, largest files, then maybe a average file
size???
The 20 smallest:
`find -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20`
The 20 largest: either use tail instead of head or reverse sorting with
-r.
You can also first pipe the output of stat into a file so you can sort and analyse the list more efficiently, including calculating averages.
When I first run this while in / itself, it occurred to me that it
doesn't specify what directory. I thought maybe changing to the
directory I want it to look at would work but get this:
root@fireball /home/dale/Desktop/Crypt # `find -type f -print0 | xargs
-0 stat -c '%s %n' | sort -n | head -n 20`
-bash: 2: command not found
root@fireball /home/dale/Desktop/Crypt #
It works if I'm in the / directory but not when I'm cd'd to the
directory I want to know about. I don't see a spot to change it. Ideas.
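Frank's stat pipeline can also feed an average through awk. A sketch on a scratch directory (file names and sizes invented):

```shell
dir=$(mktemp -d)
printf 'x'     > "$dir/one"     # 1 byte
printf 'xxx'   > "$dir/three"   # 3 bytes
printf 'xxxxx' > "$dir/five"    # 5 bytes
# Smallest files first, as in the earlier pipeline
find "$dir" -type f -print0 | xargs -0 stat -c '%s %n' | sort -n | head -n 20
# Average file size in bytes
avg=$(find "$dir" -type f -print0 | xargs -0 stat -c '%s' \
    | awk '{ sum += $1; n++ } END { print sum / n }')
echo "average: $avg bytes"
rm -r "$dir"
```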
On Wednesday, 19 April 2023 18:59:26 BST Dale wrote:
Peter Humphrey wrote:
<SNIP>
When I run that command, sdd is my SSD drive, ironic I know. Anyway, it
doesn't show block sizes. It returns nothing.
I did say it was for an NVMe drive, Dale. If your drive was one of those,
the kernel would have named it /dev/nvme0n1 or similar.
On Thursday, 20 April 2023 10:29:59 BST Dale wrote:
<SNIP>
It works if I'm in the / directory but not when I'm cd'd to the
directory I want to know about. I don't see a spot to change it. Ideas.
In place of "find -type..." say "find / -type..."
<SNIP>
I thought about du but given the number of files I have here,
it would be a really HUGE list of files. Could take hours or more too.
I use a “cache” of text files with file listings of all my external drives.
This allows me to glance over my entire data storage without having to plug
in any drive. It uses tree underneath to get the list:
`tree -afx -DFins --dirsfirst --du --timefmt "%Y-%m-%d %T"`
This gives me a list of all directories and files, with their full path,
date and size information and accumulated directory size in a concise
format. Add -pug to also include permissions.
Save this for later use. ;-)
In place of "find -type..." say "find / -type..."
Ahhh, that worked. I also realized I need to leave off the ` at the
beginning and end. I thought I left those out. I copy and paste a
lot. lol
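For what it's worth, the backticks were the actual culprit: at the prompt, `…` is command substitution, so bash ran the pipeline and then tried to execute its output as a new command. The first word of that output was a file size, hence "-bash: 2: command not found". A minimal illustration:

```shell
# Command substitution captures a command's output...
out=`echo "2 /some/file"`
echo "$out"
# ...but typing a bare `pipeline` at the prompt makes bash try to
# execute the captured output as a command, which is where
# "-bash: 2: command not found" came from
```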