This could be a ZFS/FreeBSD issue, but I figured I'd start here to see if anyone has seen this behavior before... I started to run tight on disk space on one of my servers and noticed that the CNFS buffers were using an amount of disk space that is larger than the 'actual' file size of the buffer.
I have this cycbuff.conf entry:
cycbuff:SBIN2:/usr/local/news/spool/bin1:102400000
Looking at this file with 'ls' the size lines up:
-rw-r--r-- 1 news news 104857600000 Oct 4 15:14 bin1
However, when I check actual disk space used with du it is much greater:
# du -h bin1
129G bin1
# du bin1
135118061 bin1
This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB of space on the filesystem. How is this possible?
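For reference, cycbuff sizes in cycbuff.conf are given in kilobytes, so 102400000 KB x 1024 = 104857600000 bytes, exactly the length ls shows. ls -l reports the apparent size (st_size), while du reports allocated blocks (st_blocks), and the two can legitimately differ. A minimal sketch to print both for a buffer, assuming GNU stat on Linux, with the FreeBSD equivalents noted in comments:

# Compare apparent size (st_size) with allocated size (st_blocks),
# i.e. what 'ls -l' vs. 'du' report for the same file.
f=/usr/local/news/spool/bin1
apparent=$(stat -c %s "$f")   # GNU stat; on FreeBSD: stat -f %z "$f"
blocks=$(stat -c %b "$f")     # st_blocks, 512-byte units; FreeBSD: stat -f %b "$f"
echo "apparent:  $apparent bytes"
echo "allocated: $((blocks * 512)) bytes"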
Hi Jesse,
> I have this cycbuff.conf entry:
> cycbuff:SBIN2:/usr/local/news/spool/bin1:102400000
> Looking at this file with 'ls' the size lines up:
> -rw-r--r-- 1 news news 104857600000 Oct 4 15:14 bin1
> However, when I check actual disk space used with du it is much greater:
> # du -h bin1
> 129G bin1
> # du bin1
> 135118061 bin1
I've checked my CNFS buffers on Debian, and do not see such a difference.
cycbuff:FR1:/home/news/spool/cycbuffs/frcnfs:4194304
% du frcnfs
4194308 frcnfs
% ls -l
-rw-r--r-- 1 news news 4294967296 5 oct. 20:36 frcnfs
> This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB
> of space on the filesystem. How is this possible?
I don't know. Is it the same for your other CNFS buffers?
It would have been interesting to see the output of "cnfsstat" on your
bin1. Do you still have a checkpoint for SBIN2 in news.notice to see
its reported length?
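For anyone following along, both checks are straightforward to run; a sketch, assuming a default INN layout under /usr/local/news (pathbin and pathlog may differ on your install):

# Print CNFS buffer status and pull recent CNFS lines from the log
# (paths assume a default INN install; adjust as needed).
/usr/local/news/bin/cnfsstat
grep CNFS /usr/local/news/log/news.notice | tail -n 20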
The only thing I see in news.notice about CNFS is lines like this:
innd[74417]: CNFS: CNFSflushallheads: flushing SBIN2
I've made a new buffer and will wait until it wraps to see. So far the size is
in line with what I expect.
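For completeness, the two usual ways to create such a test buffer differ in how space is allocated up front; a sketch for a 10000000 KB buffer like the one discussed below (the file name matches the thread, the rest is illustrative):

# Create a 10000000 KB (10240000000 byte) cycbuff, either fully
# written or sparse. A dd-filled buffer allocates blocks immediately
# (though ZFS compresses the zeros away); a truncated one stays
# sparse until articles are written.
dd if=/dev/zero of=cnfs2 bs=1024 count=10000000
# Sparse alternative:
#   truncate -s 10240000000 cnfs2
chown news:news cnfs2
chmod 0644 cnfs2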
On 05.10.23 16:25, Jesse Rehmer wrote:
> This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB
> of space on the filesystem. How is this possible?
Do you have compression enabled? How did you create the cycbuff: sparse via fallocate, or filled via dd from /dev/zero?
I have ZFS with lz4 compression, and as you can see below, 'ls -l' reports the configured size but 'du -h' reports the real space used by each file.
ls -l
total 717894363
-rw-r--r-- 1 news news 107374182400 Oct 25 2022 BIG1.cyc
-rw-rw---- 1 news news 107374182400 Oct 27 2022 BIG2.cyc
-rw-rw---- 1 news news 107374182400 Dec 10 2022 BIG3.cyc
-rw-r--r-- 1 news news 107374182400 May 21 03:27 BIG4.cyc
-rw-r--r-- 1 news news 107374182400 Nov 1 2022 BIG5.cyc
-rw-r--r-- 1 news news 107374182400 Nov 1 2022 BIG6.cyc
-rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW10.cyc
-rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW11.cyc
-rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW12.cyc
-rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW13.cyc
-rw-rw---- 1 news news 107374182400 Oct 28 2022 LOW14.cyc
-rw-rw---- 1 news news 107374182400 Nov 4 2022 LOW15.cyc
-rw-rw---- 1 news news 107374182400 Apr 10 18:08 LOW16.cyc
-rw-rw---- 1 news news 107374182400 May 21 03:27 LOW17.cyc
-rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW18.cyc
-rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW19.cyc
-rw-r--r-- 1 news news 107374182400 Oct 23 2022 LOW1.cyc
-rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW20.cyc
-rw-r--r-- 1 news news 107374182400 Oct 23 2022 LOW2.cyc
-rw-rw---- 1 news news 107374182400 Oct 24 2022 LOW3.cyc
-rw-rw---- 1 news news 107374182400 Oct 24 2022 LOW4.cyc
-rw-rw---- 1 news news 107374182400 Oct 25 2022 LOW5.cyc
-rw-rw---- 1 news news 107374182400 Oct 25 2022 LOW6.cyc
-rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW7.cyc
-rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW8.cyc
-rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW9.cyc
-rw-r--r-- 1 news news 107374182400 May 21 03:26 MID1.cyc
-rw-rw---- 1 news news 107374182400 Oct 15 2022 MID2.cyc
-rw-rw---- 1 news news 107374182400 Oct 15 2022 MID3.cyc
-rw-rw---- 1 news news 107374182400 Oct 15 2022 MID4.cyc
-rw-rw---- 1 news news 107374182400 Oct 15 2022 MID5.cyc
-rw-rw---- 1 news news 107374182400 Oct 15 2022 MID6.cyc
du -h *
59G BIG1.cyc
50G BIG2.cyc
52G BIG3.cyc
4.2G BIG4.cyc
33K BIG5.cyc
33K BIG6.cyc
34G LOW10.cyc
34G LOW11.cyc
35G LOW12.cyc
36G LOW13.cyc
36G LOW14.cyc
29G LOW15.cyc
29G LOW16.cyc
4.7G LOW17.cyc
11K LOW18.cyc
11K LOW19.cyc
22G LOW1.cyc
11K LOW20.cyc
25G LOW2.cyc
21G LOW3.cyc
25G LOW4.cyc
33G LOW5.cyc
30G LOW6.cyc
31G LOW7.cyc
34G LOW8.cyc
36G LOW9.cyc
35G MID1.cyc
11K MID2.cyc
11K MID3.cyc
11K MID4.cyc
11K MID5.cyc
11K MID6.cyc
and 'du -b' reports another size, the apparent size in bytes...
du -b *
107374182400 BIG1.cyc
107374182400 BIG2.cyc
107374182400 BIG3.cyc
107374182400 BIG4.cyc
107374182400 BIG5.cyc
107374182400 BIG6.cyc
107374182400 LOW10.cyc
107374182400 LOW11.cyc
107374182400 LOW12.cyc
107374182400 LOW13.cyc
107374182400 LOW14.cyc
107374182400 LOW15.cyc
107374182400 LOW16.cyc
107374182400 LOW17.cyc
107374182400 LOW18.cyc
107374182400 LOW19.cyc
107374182400 LOW1.cyc
107374182400 LOW20.cyc
107374182400 LOW2.cyc
107374182400 LOW3.cyc
107374182400 LOW4.cyc
107374182400 LOW5.cyc
107374182400 LOW6.cyc
107374182400 LOW7.cyc
107374182400 LOW8.cyc
107374182400 LOW9.cyc
107374182400 MID1.cyc
107374182400 MID2.cyc
107374182400 MID3.cyc
107374182400 MID4.cyc
107374182400 MID5.cyc
107374182400 MID6.cyc
and finally, plain 'du *':
du *
61314603 BIG1.cyc
51517375 BIG2.cyc
53918660 BIG3.cyc
4385048 BIG4.cyc
33 BIG5.cyc
33 BIG6.cyc
35449737 LOW10.cyc
35492501 LOW11.cyc
36521650 LOW12.cyc
37057675 LOW13.cyc
36731237 LOW14.cyc
30020730 LOW15.cyc
29718199 LOW16.cyc
4829014 LOW17.cyc
11 LOW18.cyc
11 LOW19.cyc
22134082 LOW1.cyc
11 LOW20.cyc
26145542 LOW2.cyc
21492682 LOW3.cyc
25516461 LOW4.cyc
33670575 LOW5.cyc
31290129 LOW6.cyc
31834280 LOW7.cyc
35086535 LOW8.cyc
37314320 LOW9.cyc
36453183 MID1.cyc
11 MID2.cyc
11 MID3.cyc
11 MID4.cyc
11 MID5.cyc
11 MID6.cyc
The size of the cycbuff directory per 'du -hs' is 685G, which is also what
'zfs list' reports for the dataset, with a compressratio of 2.97x there :)
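The same accounting is visible from the ZFS side by querying the dataset properties directly; a sketch, with a hypothetical dataset name:

# Show compression settings and logical vs. physical usage for the
# dataset holding the cycbuffs (dataset name is hypothetical).
zfs get compression,compressratio,used,logicalused tank/news/cycbuffs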
I do have compression enabled and created the buffers with dd from /dev/zero.
In your examples the actual size is smaller, which is what I would expect, but
my situation is the reverse, where the actual size used is larger by about 30%.
On 05.10.23 23:42, Jesse Rehmer wrote:
> I do have compression enabled and created the buffers with dd from /dev/zero.
> In your examples the actual size is smaller, which is what I would expect, but
> my situation is the reverse, where the actual size used is larger by about 30%.
Weird; I can't get my head around it. INN would not start if the file size in the config and on disk don't match. I believe I had such an issue once while playing around: I copied wrong values and INN failed to start or initialize the buffer.
Maybe there is no check once the buffer is already initialized and the file has somehow grown. But how it could grow bigger, I have no idea; it should not be possible. CNFS puts data in some kind of blocks, but I haven't dived deep enough to get a better understanding.
It would be cool if you did not delete the file, so we could investigate it further.
The new buffer has wrapped a few times and is now larger than its 'apparent' size:
[news@spool1 ~/spool]$ du cnfs2
11739697 cnfs2
[news@spool1 ~/spool]$ du -A cnfs2
10000000 cnfs2
$ cnfsstat
Class SBIN for groups matching "control.cancel,junk,alt.binaries.*"
Buffer SBIN1, size: 9.54 GBytes, position: 7.32 GBytes 12.77 cycles
Newest: 2023-10-09 18:42:21, 0 days, 0:00:02 ago
On 10.10.23 01:43, Jesse Rehmer wrote:
> The new buffer has wrapped a few times and is now larger than its 'apparent' size:
> [...]
My OS lacks 'du -A'; I'm not on *BSD :/
What do 'du -b cnfs2' (size in bytes) and 'du -h cnfs2' (human-readable) report, as well as 'ls -l cnfs2' and 'ls -lh cnfs2'?
I'd mostly bet it is an optical illusion.
The file is only 10G; can you produce a copy for us, or just move a copy to another filesystem, e.g. ext4?
What size value did you set for this buffer in cycbuff.conf?
On Oct 10, 2023 at 2:38:12 AM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:
> What do 'du -b cnfs2' (size in bytes) and 'du -h cnfs2' (human-readable) report, as well as 'ls -l cnfs2' and 'ls -lh cnfs2'?
> [...]
> What size value did you set for this buffer in cycbuff.conf?
The size in cycbuff.conf is 10000000.
From the FreeBSD/ZFS box:
$ ls -laF cnfs2
-rw-r--r-- 1 news news 10240000000 Oct 10 06:08 cnfs2
$ ls -lh cnfs2
-rw-r--r-- 1 news news 9.5G Oct 10 06:08 cnfs2
FreeBSD's du does not have -b; block usage is the default, and -A reports the apparent size:
$ du cnfs2
11967209 cnfs2
$ du -A cnfs2
10000000 cnfs2
$ du -h cnfs2
11G cnfs2
$ du -Ah cnfs2
9.5G cnfs2
I scp'd the file over to a Linux box with ext4, and there it reports what I would expect:
# du -h cnfs2
9.6G cnfs2
# du -b cnfs2
10240000000 cnfs2
# ls -lb cnfs2
-rw-r--r--. 1 jesse jesse 10240000000 Oct 10 06:52 cnfs2
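A way to isolate compression as a factor without leaving ZFS would be to copy the buffer into a fresh dataset with compression disabled and compare du there; a sketch, with hypothetical dataset and path names:

# Copy the buffer into an uncompressed dataset and compare the
# allocated sizes (dataset and paths are hypothetical).
zfs create -o compression=off tank/test
cp /home/news/spool/cnfs2 /tank/test/cnfs2
du -h /home/news/spool/cnfs2 /tank/test/cnfs2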
On 10.10.23 15:38, Jesse Rehmer wrote:
> There must be something with compression overhead or something that ZFS isn't
> letting go of?
Do you take ZFS snapshots, for example daily or hourly? Maybe BSD counts snapshot space as part of the file size?
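If snapshots were holding the space, ZFS's own accounting would show it; a quick check, dataset name hypothetical:

# List snapshots and show how much space they pin on the dataset
# holding the cycbuffs (dataset name is hypothetical).
zfs list -t snapshot -r tank/news
zfs get usedbysnapshots,usedbydataset tank/news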