• CNFS buffer consuming more disk space than the file size?

    From Jesse Rehmer@21:1/5 to All on Thu Oct 5 14:25:25 2023
    This could be a ZFS/FreeBSD issue, but I figured I'd start here to see if anyone has seen this behavior before... I started to run tight on disk space on one
    of my servers and noticed that the CNFS buffers were using an amount of disk space that is larger than the 'actual' file size of the buffer.

    I have this cycbuff.conf entry:

    cycbuff:SBIN2:/usr/local/news/spool/bin1:102400000

    Looking at this file with 'ls' the size lines up:

    -rw-r--r-- 1 news news 104857600000 Oct 4 15:14 bin1

    However, when I check actual disk space used with du it is much greater:

    # du -h bin1
    129G bin1

    # du bin1
    135118061 bin1

    This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB of space on the filesystem. How is this possible?
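
    For anyone reproducing this, a minimal way to compare the apparent size with the
    space actually allocated, and to check the ZFS dataset properties that can make the
    two differ (the dataset name below is just an example):

    # apparent file size vs. blocks actually allocated (FreeBSD)
    ls -l bin1
    du -Ah bin1
    du -h bin1
    # dataset properties that affect on-disk allocation
    zfs get compression,recordsize,copies zroot/news/spool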

  • From Julien ÉLIE@21:1/5 to All on Thu Oct 5 20:43:52 2023
    Hi Jesse,

    I have this cycbuff.conf entry:

    cycbuff:SBIN2:/usr/local/news/spool/bin1:102400000

    Looking at this file with 'ls' the size lines up:

    -rw-r--r-- 1 news news 104857600000 Oct 4 15:14 bin1

    However, when I check actual disk space used with du it is much greater:

    # du -h bin1
    129G bin1

    # du bin1
    135118061 bin1

    I've checked my CNFS buffers on Debian, and do not see such a difference.


    cycbuff:FR1:/home/news/spool/cycbuffs/frcnfs:4194304

    % du frcnfs
    4194308 frcnfs

    % ls -l
    -rw-r--r-- 1 news news 4294967296 5 oct. 20:36 frcnfs



    This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB of space on the filesystem. How is this possible?

    I don't know. Is it the same for your other CNFS buffers?

    It would have been interesting to see the output of "cnfsstat" on your
    bin1. Do you still have a checkpoint for SBIN2 in news.notice to see
    its reported length?

    --
    Julien ÉLIE

    "A man full of vices sooner or later ends up behind bars." (Marc
    Escayrol)

  • From Jesse Rehmer@21:1/5 to iulius@nom-de-mon-site.com.invalid on Thu Oct 5 18:51:19 2023
    On Oct 5, 2023 at 1:43:52 PM CDT, "Julien ÉLIE" <iulius@nom-de-mon-site.com.invalid> wrote:

    Hi Jesse,

    [quoted report and Julien's Debian figures snipped]

    This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB
    of space on the filesystem. How is this possible?

    I don't know. Is it the same for your other CNFS buffers?

    It would have been interesting to see the output of "cnfsstat" on your
    bin1. Do you still have a checkpoint for SBIN2 in news.notice to see
    its reported length?

    There was another small 1GB buffer that was 1.2GB on disk; unfortunately, I deleted it too. The only thing I see in news.notice about CNFS is lines like this:

    innd[74417]: CNFS: CNFSflushallheads: flushing SBIN2

    I've made a new buffer and will wait until it wraps to see. So far the size is in line with what I expect:

    $ cnfsstat
    Class SBIN for groups matching "control.cancel,junk,alt.binaries.*"
    Buffer SBIN1, size: 9.54 GBytes, position: 9.21 GBytes 0.97 cycles
    Newest: 2023-10-05 13:47:44, 0 days, 0:00:06 ago

    $ du -h cnfs2
    9.3G cnfs2

  • From Julien ÉLIE@21:1/5 to All on Thu Oct 5 20:57:12 2023
    Hi Jesse,

    The only thing I see in news.notice about CNFS are lines like this:

    innd[74417]: CNFS: CNFSflushallheads: flushing SBIN2

    Ah yes, there's an inn.conf parameter (docnfsstat), *false* by
    default, that makes INN regularly log the results of cnfsstat in news.notice.

    As I activated it a long time ago, I'd thought it was normal to have these
    logs :)
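
    For reference, enabling it is a one-line change in inn.conf (it takes effect after
    innd rereads inn.conf, e.g. on restart):

    docnfsstat: true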


    I've made a new buffer and will wait until it wraps to see. So far the size is
    in line with what I expect

    OK, thanks. Keep us informed!

    --
    Julien ÉLIE

    "It's quite an art to cut a cake in such a way that everyone
    believes they got the biggest piece."

  • From Billy G. (go-while)@21:1/5 to Jesse Rehmer on Thu Oct 5 23:39:32 2023
    On 05.10.23 16:25, Jesse Rehmer wrote:
    This could be a ZFS/FreeBSD issue, but I figured I'd start here to see if anyone has seen this behavior before... I started to run tight on disk space on one of my servers and noticed that the CNFS buffers were using an amount of disk space that is larger than the 'actual' file size of the buffer.

    I have this cycbuff.conf entry:

    cycbuff:SBIN2:/usr/local/news/spool/bin1:102400000

    Looking at this file with 'ls' the size lines up:

    -rw-r--r-- 1 news news 104857600000 Oct 4 15:14 bin1

    However, when I check actual disk space used with du it is much greater:

    # du -h bin1
    129G bin1

    # du bin1
    135118061 bin1

    This buffer contained 'junk' so I decided to remove it, and did reclaim 129GB of space on the filesystem. How is this possible?

    Do you have compression enabled? How did you create the cycbuff:
    sparse via fallocate, or filled via dd from /dev/zero?

    I have ZFS with lz4 compression and, as you can see, 'ls -l' reports the
    configured size but 'du -h' reports the real space used by the file.

    ls -l
    total 717894363
    -rw-r--r-- 1 news news 107374182400 Oct 25 2022 BIG1.cyc
    -rw-rw---- 1 news news 107374182400 Oct 27 2022 BIG2.cyc
    -rw-rw---- 1 news news 107374182400 Dec 10 2022 BIG3.cyc
    -rw-r--r-- 1 news news 107374182400 May 21 03:27 BIG4.cyc
    -rw-r--r-- 1 news news 107374182400 Nov 1 2022 BIG5.cyc
    -rw-r--r-- 1 news news 107374182400 Nov 1 2022 BIG6.cyc
    -rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW10.cyc
    -rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW11.cyc
    -rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW12.cyc
    -rw-rw---- 1 news news 107374182400 Oct 27 2022 LOW13.cyc
    -rw-rw---- 1 news news 107374182400 Oct 28 2022 LOW14.cyc
    -rw-rw---- 1 news news 107374182400 Nov 4 2022 LOW15.cyc
    -rw-rw---- 1 news news 107374182400 Apr 10 18:08 LOW16.cyc
    -rw-rw---- 1 news news 107374182400 May 21 03:27 LOW17.cyc
    -rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW18.cyc
    -rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW19.cyc
    -rw-r--r-- 1 news news 107374182400 Oct 23 2022 LOW1.cyc
    -rw-rw---- 1 news news 107374182400 Nov 6 2022 LOW20.cyc
    -rw-r--r-- 1 news news 107374182400 Oct 23 2022 LOW2.cyc
    -rw-rw---- 1 news news 107374182400 Oct 24 2022 LOW3.cyc
    -rw-rw---- 1 news news 107374182400 Oct 24 2022 LOW4.cyc
    -rw-rw---- 1 news news 107374182400 Oct 25 2022 LOW5.cyc
    -rw-rw---- 1 news news 107374182400 Oct 25 2022 LOW6.cyc
    -rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW7.cyc
    -rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW8.cyc
    -rw-rw---- 1 news news 107374182400 Oct 26 2022 LOW9.cyc
    -rw-r--r-- 1 news news 107374182400 May 21 03:26 MID1.cyc
    -rw-rw---- 1 news news 107374182400 Oct 15 2022 MID2.cyc
    -rw-rw---- 1 news news 107374182400 Oct 15 2022 MID3.cyc
    -rw-rw---- 1 news news 107374182400 Oct 15 2022 MID4.cyc
    -rw-rw---- 1 news news 107374182400 Oct 15 2022 MID5.cyc
    -rw-rw---- 1 news news 107374182400 Oct 15 2022 MID6.cyc


    du -h *
    59G BIG1.cyc
    50G BIG2.cyc
    52G BIG3.cyc
    4.2G BIG4.cyc
    33K BIG5.cyc
    33K BIG6.cyc
    34G LOW10.cyc
    34G LOW11.cyc
    35G LOW12.cyc
    36G LOW13.cyc
    36G LOW14.cyc
    29G LOW15.cyc
    29G LOW16.cyc
    4.7G LOW17.cyc
    11K LOW18.cyc
    11K LOW19.cyc
    22G LOW1.cyc
    11K LOW20.cyc
    25G LOW2.cyc
    21G LOW3.cyc
    25G LOW4.cyc
    33G LOW5.cyc
    30G LOW6.cyc
    31G LOW7.cyc
    34G LOW8.cyc
    36G LOW9.cyc
    35G MID1.cyc
    11K MID2.cyc
    11K MID3.cyc
    11K MID4.cyc
    11K MID5.cyc
    11K MID6.cyc

    and 'du -b' reports another size (the apparent size in bytes)...

    du -b *
    107374182400 BIG1.cyc
    107374182400 BIG2.cyc
    107374182400 BIG3.cyc
    107374182400 BIG4.cyc
    107374182400 BIG5.cyc
    107374182400 BIG6.cyc
    107374182400 LOW10.cyc
    107374182400 LOW11.cyc
    107374182400 LOW12.cyc
    107374182400 LOW13.cyc
    107374182400 LOW14.cyc
    107374182400 LOW15.cyc
    107374182400 LOW16.cyc
    107374182400 LOW17.cyc
    107374182400 LOW18.cyc
    107374182400 LOW19.cyc
    107374182400 LOW1.cyc
    107374182400 LOW20.cyc
    107374182400 LOW2.cyc
    107374182400 LOW3.cyc
    107374182400 LOW4.cyc
    107374182400 LOW5.cyc
    107374182400 LOW6.cyc
    107374182400 LOW7.cyc
    107374182400 LOW8.cyc
    107374182400 LOW9.cyc
    107374182400 MID1.cyc
    107374182400 MID2.cyc
    107374182400 MID3.cyc
    107374182400 MID4.cyc
    107374182400 MID5.cyc
    107374182400 MID6.cyc

    and finally, plain 'du *':

    du *
    61314603 BIG1.cyc
    51517375 BIG2.cyc
    53918660 BIG3.cyc
    4385048 BIG4.cyc
    33 BIG5.cyc
    33 BIG6.cyc
    35449737 LOW10.cyc
    35492501 LOW11.cyc
    36521650 LOW12.cyc
    37057675 LOW13.cyc
    36731237 LOW14.cyc
    30020730 LOW15.cyc
    29718199 LOW16.cyc
    4829014 LOW17.cyc
    11 LOW18.cyc
    11 LOW19.cyc
    22134082 LOW1.cyc
    11 LOW20.cyc
    26145542 LOW2.cyc
    21492682 LOW3.cyc
    25516461 LOW4.cyc
    33670575 LOW5.cyc
    31290129 LOW6.cyc
    31834280 LOW7.cyc
    35086535 LOW8.cyc
    37314320 LOW9.cyc
    36453183 MID1.cyc
    11 MID2.cyc
    11 MID3.cyc
    11 MID4.cyc
    11 MID5.cyc
    11 MID6.cyc

    The size of the cycbuff directory per 'du -hs' is 685G, which 'zfs list' also
    reports for the dataset, with a compressratio of 2.97x there :)
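
    If you want to compare, the per-dataset view of those numbers is something like
    this (the dataset name is just an example):

    zfs get compression,compressratio,logicalused,used tank/news/cycbuffs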

  • From Jesse Rehmer@21:1/5 to All on Thu Oct 5 21:42:55 2023
    On Oct 5, 2023 at 4:39:32 PM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:

    On 05.10.23 16:25, Jesse Rehmer wrote:
    [original report snipped]

    Do you have compression enabled? How did you create the cycbuff:
    sparse via fallocate, or filled via dd from /dev/zero?

    I have ZFS with lz4 compression and, as you can see, 'ls -l' reports the
    configured size but 'du -h' reports the real space used by the file.

    [ls -l and du listings snipped]

    I do have compression enabled and created the buffers with dd from /dev/zero.

    In your examples the actual size is smaller, which is what I would expect, but my situation is the reverse: the actual size used is larger by about 30%.
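
    For reference, the two creation styles being discussed look roughly like this for a
    10000000 KB cycbuff (the path and size are just examples):

    # filled with zeroes, as was done here
    dd if=/dev/zero of=/usr/local/news/spool/cnfs2 bs=1k count=10000000
    # sparse instead (FreeBSD truncate; on Linux, truncate or fallocate)
    truncate -s 10000000K /usr/local/news/spool/cnfs2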

  • From Billy G. (go-while)@21:1/5 to All on Thu Oct 5 23:42:17 2023
    To add a note: the buffers holding about 35G are 100% full, but 'du -hs'
    shows the compressed size.

  • From Billy G. (go-while)@21:1/5 to Jesse Rehmer on Fri Oct 6 22:22:12 2023
    On 05.10.23 23:42, Jesse Rehmer wrote:
    I do have compression enabled and created the buffers with dd from /dev/zero.

    In your examples the actual size is smaller, which is what I would expect, but
    my situation is the reverse: the actual size used is larger by about 30%.


    Weird; I can't get my head around it.
    INN would not start if the file size in the config and on disk don't match.
    I believe I had such an issue once while playing around: I copied wrong values
    and INN failed to start or initialize the buffer.
    Maybe there is no check whether it is already initialized and the file has grown somehow,
    but how it could grow bigger I have no idea; it should not be possible.
    CNFS puts data in some kind of blocks, but I have not dived deep enough to
    get a better understanding.

    It would be cool if you had not deleted the file, so we could investigate it
    further.

  • From Jesse Rehmer@21:1/5 to All on Fri Oct 6 20:19:05 2023
    On Oct 6, 2023 at 3:22:12 PM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:

    [earlier discussion snipped]
    It would be cool if you had not deleted the file, so we could investigate it
    further.

    I know, I know, but I needed the space and was frustrated. I created a smaller one in its place; I'll monitor its size.

    I do not understand very much about how filesystems work, just the very basic things taught to me 30 years ago, but I went looking for ZFS-related problems online. I found a case or two that on the surface sounded similar, but they were referring to odd situations where compression overhead would end up being greater than the savings the compression provided, and those issues seemed to stem from something relating to block sizes that I didn't understand.

    I also wondered if it could be some weird issue with ZFS snapshots and the way CNFS stores data: maybe snapshot removal isn't clearing up all the referenced blocks it should. But this seems far-fetched.
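
    One way to check that theory is ZFS's own space accounting, which breaks out how
    much of a dataset's usage is held by snapshots (the dataset name is just an example):

    # USEDSNAP shows space pinned by snapshots on the dataset
    zfs list -o space zroot/news/spool
    # and list the snapshots themselves
    zfs list -t snapshot -r zroot/news/spool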

  • From Jesse Rehmer@21:1/5 to All on Mon Oct 9 23:43:14 2023
    On Oct 6, 2023 at 3:22:12 PM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:

    [earlier discussion snipped]

    The new buffer has wrapped several times and is now larger than its 'apparent' size:

    [news@spool1 ~/spool]$ du cnfs2
    11739697 cnfs2
    [news@spool1 ~/spool]$ du -A cnfs2
    10000000 cnfs2

    $ cnfsstat
    Class SBIN for groups matching "control.cancel,junk,alt.binaries.*"
    Buffer SBIN1, size: 9.54 GBytes, position: 7.32 GBytes 12.77 cycles
    Newest: 2023-10-09 18:42:21, 0 days, 0:00:02 ago

  • From Billy G. (go-while)@21:1/5 to Jesse Rehmer on Tue Oct 10 09:38:12 2023
    On 10.10.23 01:43, Jesse Rehmer wrote:
    The new buffer has wrapped several times and is now larger than its 'apparent' size:

    [news@spool1 ~/spool]$ du cnfs2
    11739697 cnfs2
    [news@spool1 ~/spool]$ du -A cnfs2
    10000000 cnfs2

    $ cnfsstat
    Class SBIN for groups matching "control.cancel,junk,alt.binaries.*"
    Buffer SBIN1, size: 9.54 GBytes, position: 7.32 GBytes 12.77 cycles
    Newest: 2023-10-09 18:42:21, 0 days, 0:00:02 ago

    My OS lacks 'du -A'; I'm not on *BSD :/

    What do 'du -b cnfs2' (byte size) and 'du -h cnfs2' (human-readable) show,
    as well as 'ls -l cnfs2' and 'ls -lh cnfs2'?

    I'd mostly bet it is an optical illusion.

    The file is only 10G; can you produce a copy for us,
    or just move a copy to another filesystem, ext4?

    What size value did you set in cycbuff.conf for this buffer?

  • From Jesse Rehmer@21:1/5 to All on Tue Oct 10 11:57:41 2023
    On Oct 10, 2023 at 2:38:12 AM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:

    On 10.10.23 01:43, Jesse Rehmer wrote:
    [new buffer figures snipped]

    My OS lacks 'du -A'; I'm not on *BSD :/

    What do 'du -b cnfs2' (byte size) and 'du -h cnfs2' (human-readable) show,
    as well as 'ls -l cnfs2' and 'ls -lh cnfs2'?

    I'd mostly bet it is an optical illusion.

    The file is only 10G; can you produce a copy for us,
    or just move a copy to another filesystem, ext4?

    What size value did you set in cycbuff.conf for this buffer?

    The size in cycbuff.conf is 10000000.

    From the FreeBSD/ZFS box:

    $ ls -laF cnfs2
    -rw-r--r-- 1 news news 10240000000 Oct 10 06:08 cnfs2

    $ ls -lh cnfs2
    -rw-r--r-- 1 news news 9.5G Oct 10 06:08 cnfs2

    FreeBSD's du does not have -b (it's the default)

    $ du cnfs2
    11967209 cnfs2

    $ du -A cnfs2
    10000000 cnfs2

    $ du -h cnfs2
    11G cnfs2

    $ du -Ah cnfs2
    9.5G cnfs2


    I scp'd the file over to a Linux box with ext4; there it reports as I would expect:

    # du -h cnfs2
    9.6G cnfs2

    # du -b cnfs2
    10240000000 cnfs2

    # ls -lb cnfs2
    -rw-r--r--. 1 jesse jesse 10240000000 Oct 10 06:52 cnfs2
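
    For anyone comparing the two systems, these should be the equivalent pairs of views
    (paths are examples):

    # FreeBSD: apparent size vs. allocated space
    du -Ah cnfs2; du -h cnfs2
    # GNU coreutils (Linux): the same two views
    du -h --apparent-size cnfs2; du -h cnfs2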

  • From Jesse Rehmer@21:1/5 to jesse.rehmer@blueworldhosting.com on Tue Oct 10 13:38:29 2023
    On Oct 10, 2023 at 6:57:41 AM CDT, "Jesse Rehmer" <jesse.rehmer@blueworldhosting.com> wrote:

    [previous message quoted in full; snipped]

    I realized after I sent this that bytes isn't the default of du in FreeBSD, derp. Interestingly, I recopied this file back to the original box, in a different folder, and it's a smaller size.

    There must be some compression overhead or something that ZFS isn't letting go of?

    # du -A cnfs2
    10000000 cnfs2

    # du cnfs2
    9781913 cnfs2

    # du -h cnfs2
    9.3G cnfs2

  • From Russ Allbery@21:1/5 to Jesse Rehmer on Tue Oct 10 08:48:24 2023
    Jesse Rehmer <jesse.rehmer@blueworldhosting.com> writes:

    The new buffer has wrapped several times and is now larger than its 'apparent' size:

    [news@spool1 ~/spool]$ du cnfs2
    11739697 cnfs2
    [news@spool1 ~/spool]$ du -A cnfs2
    10000000 cnfs2

    $ cnfsstat
    Class SBIN for groups matching "control.cancel,junk,alt.binaries.*"
    Buffer SBIN1, size: 9.54 GBytes, position: 7.32 GBytes 12.77 cycles
    Newest: 2023-10-09 18:42:21, 0 days, 0:00:02 ago

    On an ext4 file system, I only see the expected file system overhead and
    block size rounding.

    $ du *
    10485764 B01
    10485764 B02
    10485764 B03
    10485764 B04
    $ du --apparent-size *
    10485760 B01
    10485760 B02
    10485760 B03
    10485760 B04

    That's just one data point, but that does support my feeling that whatever
    is going on here is something related to ZFS specifically, as opposed to,
    say, a bug in INN that's causing it to write past the end of the cycbuff
    or something.
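
    For what it's worth, the delta in the listing above is just 10485764 - 10485760 =
    4 KB per buffer, i.e. essentially nothing beyond metadata and block rounding.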

    --
    Russ Allbery (eagle@eyrie.org) <https://www.eyrie.org/~eagle/>

    Please post questions rather than mailing me directly.
    <https://www.eyrie.org/~eagle/faqs/questions.html> explains why.

  • From Jesse Rehmer@21:1/5 to All on Tue Oct 10 16:58:19 2023
    On Oct 10, 2023 at 11:44:33 AM CDT, "Billy G. (go-while)" <no-reply@no.spam> wrote:

    On 10.10.23 15:38, Jesse Rehmer wrote:

    There must be some compression overhead or something that ZFS isn't letting
    go of?

    Do you do ZFS snapshots, for example daily or hourly?

    Maybe BSD counts the snapshot size as part of the file size?

    The only snapshots I take are the automatic ones freebsd-update performs.

    Something is wonky with ZFS, but what, I have no idea. I just copied the file back from Linux to FreeBSD:

    # du -A cnfs2
    10000000 cnfs2

    # du cnfs2
    20224841 cnfs2

  • From Billy G. (go-while)@21:1/5 to Jesse Rehmer on Tue Oct 10 18:44:33 2023
    On 10.10.23 15:38, Jesse Rehmer wrote:

    There must be some compression overhead or something that ZFS isn't letting go of?

    Do you do ZFS snapshots, for example daily or hourly?

    Maybe BSD counts the snapshot size as part of the file size?
