• cups flooding the var/log/cups/error_log with "erasing documents" errors

    From William Unruh@2:250/1 to All on Tue Jul 21 22:25:27 2020
    Subject: cups flooding the var/log/cups/error_log with "erasing documents"
    errors

    Mageia 7 updated. I have a cups server which prints to an attached
    printer. Looking at the error_log file (yes I print to a file rather
    than the ridiculous journald logging system) I get repeated statements
    (after doing
    grep '\[Job' error_log)



    D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
    repeated every 30 sec or so for at least 3 days. I then tried to restart
    cups, and then got
    .....
    D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
    ....
    going through almost all of the Job numbers from 1 to 540 in about a couple of minutes.




    I finally gave up and erased all of the files in /var/spool/cups and
    that finally shut the system up, but that is clearly not what I should be doing.
    Note that "Removing document files" never actually succeeds in removing
    anything: all of the c00xxx files were still left, and the few d00xxx
    files were also not removed.
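
    A rough sketch of that manual cleanup, run against a scratch directory
    rather than the live spool (on a real system you would stop cupsd first
    and operate on /var/spool/cups; the file names here are made up):

```shell
# Illustration only: mimic the by-hand spool cleanup in a scratch directory.
spool=$(mktemp -d)
touch "$spool/c00309" "$spool/d00309-001"   # fake control and data files

# Remove the c00* control files and d00* data files, as was done by hand.
find "$spool" -maxdepth 1 -type f \( -name 'c0*' -o -name 'd0*' \) -delete

ls -A "$spool"    # prints nothing: the scratch spool is now empty
rmdir "$spool"
```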

    Once I did the above I finally got
    E [21/Jul/2020:09:13:12 -0700] [Job 170] Files have gone away.

    This has just started happening recently.


    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Wed Jul 22 10:50:29 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
    Mageia 7 updated. I have a cups server which prints to an attached
    printer. Looking at the error_log file (yes I print to a file rather
    than the ridiculous journald logging system) I get repeated statements
    (after doing
    grep '\[Job' error_log)



    D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
    repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
    ....
    D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
    ...
    going through almost all of the Job numbers from 1 to 540 in about a
    couple of minutes.

    Might I suggest bookmarking the following URL
    https://www.google.com/advanced_search

    putting cups in the first box
    and Removing document files
    in the second box, gets me
    About 342 results (0.35 seconds)

    If you do find a solution, it would be nice if you posted it with
    [SOLUTION] in the Subject line.

    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Wed Jul 22 22:55:49 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
    Mageia 7 updated. I have a cups server which prints to an attached
    printer. Looking at the error_log file (yes I print to a file rather
    than the ridiculous journald logging system) I get repeated statements
    (after doing
    grep '\[Job' error_log)



    D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
    repeated every 30 sec or so for at least 3 days. I then tried to restart
    cups, and then got
    ....
    D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
    ...
    going through almost all of the Job numbers from 1 to 540 in about a couple of minutes.

    Might I suggest bookmarking the following URL
    https://www.google.com/advanced_search

    putting cups in the first box
    and Removing document files
    in the second box, gets me
    About 342 results (0.35 seconds)

    Unfortunately, had you looked at them, you would have discovered that
    none offered a solution. When I searched for
    cups repeated error messages removing document files
    I got 26,500,000 results, and one was a bug report on Ubuntu from Jun 2019
    which mentioned the same problem I had, with cups 2.1.3 (I have 2.1.13),
    but no fix. The bug is reported as confirmed.
    One notice says that running "cancel -x -a", which wipes everything from
    /var/spool/cups, "solves" the problem, but then that would mean you would
    have to keep track of the state of the error_log file (before the
    messages fill the log space).


    If you do find a solution, it would be nice if you posted it with
    [SOLUTION] in the Subject line.

    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Thu Jul 23 00:50:13 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On Wed, 22 Jul 2020 21:55:49 -0000 (UTC), William Unruh wrote:
    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
    Mageia 7 updated. I have a cups server which prints to an attached
    printer. Looking at the error_log file (yes I print to a file rather
    than the ridiculous journald logging system) I get repeated statements
    (after doing
    grep '\[Job' error_log)



    D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
    repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
    ....
    D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
    ...
    going through almost all of the Job numbers from 1 to 540 in about a couple of minutes.

    Might I suggest bookmarking the following URL
    https://www.google.com/advanced_search

    putting cups in the first box
    and Removing document files
    in the second box, gets me
    About 342 results (0.35 seconds)

    Unfortunately had you looked at them you would have discovered none
    offered a solution.

    Well I will admit I did not bother doing any research for you.

    When I searched for
    cups repeated error messages removing document files
    I got 26,500,000 results,

    Then I suggest you did not follow my instructions. Tried it again:
    in the first box
    cups
    in the second box
    Removing document files
    and get
    About 341 results (0.51 seconds)


    and one was a bug report on Ubuntu which
    mentioned the same problem I had but no fix from Jun 2019.
    with cups 2.1.3 (I have 2.1.13)

    Yup, and that is why I would have picked the second search result I got.

    The bug is reported as confirmed.
    One notice says that running "cancel -x -a" which wipes everything from /var/spool/cups "solves" the problem,

    Which would have suggested to me to get into the cups web admin page and
    kill all jobs.

    but then that would mean you would
    have to keep track of the state of the error_log file (before they fill
    the log space)

    In my stupid opinion any competent admin should have a cron job to
    monitor logs for errors.

    I have an hourly cron job that uses xmessage to pop up any failures since
    the last time it ran.

    It seems dead easy to parse df output and give you a warning.
    Something like

    #!/bin/bash
    PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
    set -u
    Drive=""
    Line=""
    Used=0

    while read -r Line; do
        set -- $Line
        Drive=$1
        Used=$5
        if [ "$Used" -gt 80 ] ; then
            echo "WARNING: $Drive is $Used% full"
        fi
    done < <(df -h | grep '% /' | tr -d '%')

    Want an email instead? Replace the echo line with
    mail -s "WARNING: $Drive is $Used% full" root < /dev/null
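
    On GNU systems the field counting can be avoided entirely: df supports
    --output, so the percentage and mount point can be selected directly.
    A sketch of the same idea (warn_if_full is a name made up here):

```shell
# Same warning logic as the script above, but fed explicit columns.
warn_if_full() {
    # Reads "usedpercent mountpoint" pairs on stdin; warns above a threshold.
    threshold=${1:-80}
    while read -r pct mnt; do
        pct=${pct%\%}                     # strip the trailing %
        if [ "$pct" -gt "$threshold" ]; then
            echo "WARNING: $mnt is $pct% full"
        fi
    done
}

# Live use (GNU df only): select the two columns, drop the header line.
df --output=pcent,target | tail -n +2 | warn_if_full 80
```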

    If you do find a solution, it would be nice if you posted it with
    [SOLUTION] in the Subject line.

    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Thu Jul 23 01:08:29 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:

    Bob Tennent wrote:
    For years I've used port-forwarding in my router to allow
    external access to ftp, ssh, and http servers. Recently,
    this no longer works. The servers are running normally and
    can be accessed internally. I can even access them via the
    router IP address, showing that port-forwarding is still
    working. I've also tried port-forwarding and DMZ in the
    modem, to no avail. So my conjecture is that the ISP has
    decided to block packets it considers unworthy before they
    reach the modem.

    Is this at all likely? How can I test/confirm the
    conjecture? If it is confirmed, what can I do? Needless to
    say, ISP "technical support" has so far been useless.

    Just for giggles, is your gateway showing an IP address in RFC1918 or
    RFC6598 space? Also if you're using DNS, is the DNS pointing at the
    right IP address still (e.g. ISP hasn't handed out a new address instead
    of what you've had in the past).

    Just to amplify, can you ssh out to some server somewhere? Or you can go
    to
    https://portforward.com/networking/routers_ip_address.htm
    and at the bottom of the page will be listed your router's external IP
    address, the one you have to ssh/ftp/... to to get your port forwarding
    done.
    If, using that IP, you get nothing from your router, make sure first that
    your router's firewall is not blocking incoming packets to, say, port
    22 (ssh).
    As he says, remember that the ISP reserves the right to change that IP
    address without warning. Thus you need some way of finding out that IP
    address constantly and letting the machines that need to contact yours
    know what the router's IP address is when it changes.
    If you have set up your system to go through a VPN, then there is no way
    that an outsider can get at your system; that is the purpose of a VPN.
    You can sometimes add those external systems that need to get through to
    you to the routing on your computer, so that those addresses are not
    routed through the VPN but go directly; that way you can contact them
    when your router's IP address changes, and they can update their own
    routing tables.
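
    The "finding out that IP address constantly" part can be a small cron
    script that caches the last known address and reports when it changes.
    A sketch, where ip_changed is a made-up name and the actual lookup
    (e.g. querying a what-is-my-ip service with dig) is an assumption left
    to taste:

```shell
# Compare the current external IP against a cached copy; report on change.
ip_changed() {
    cache=$1 current=$2
    previous=""
    [ -f "$cache" ] && previous=$(cat "$cache")
    if [ "$current" != "$previous" ]; then
        printf '%s\n' "$current" > "$cache"
        echo "IP changed: ${previous:-none} -> $current"
    fi
}

# Example lookup (an assumption, not from the post):
#   current=$(dig +short myip.opendns.com @resolver1.opendns.com)
#   ip_changed /var/tmp/last_ip "$current"
```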



    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From William Unruh@2:250/1 to All on Thu Jul 23 02:33:35 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Wed, 22 Jul 2020 21:55:49 -0000 (UTC), William Unruh wrote:
    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
    On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
    Mageia 7 updated. I have a cups server which prints to an attached
    printer. Looking at the error_log file (yes I print to a file rather
    than the ridiculous journald logging system) I get repeated statements (after doing
    grep '\[Job' error_log)



    D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
    repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
    ....
    D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
    D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
    ...
    going through almost all of the Job numbers from 1 to 540 in about a couple of minutes.

    Might I suggest bookmarking the following URL
    https://www.google.com/advanced_search

    putting cups in the first box
    and Removing document files
    in the second box, gets me
    About 342 results (0.35 seconds)

    Unfortunately had you looked at them you would have discovered none
    offered a solution.

    Well I will admit I did not bother doing any research for you.

    But simply telling someone to go to google search is not terribly
    helpful, especially as I had done that.


    When I searched for
    cups repeated error messages removing document files
    I got 26,500,000 results,

    Then I suggest you did not follow my instructions. Tried it again:
    in the first box
    cups
    in the second box
    Removing document files
    and get
    About 341 results (0.51 seconds)


    and one was a bug report on Ubuntu which
    mentioned the same problem I had but no fix from Jun 2019.
    with cups 2.1.3 (I have 2.1.13)

    Yup, and that is why I would have picked the second search result I got.

    I had seen that one, and then lost it as it did not seem to answer my
    question immediately. I have found it again and am testing
    his patch. I must say I am not impressed by the quality of the code
    in that particular subroutine. There look to be lots of sloppy areas
    where bugs can creep in.



    The bug is reported as confirmed.
    One notice says that running "cancel -x -a" which wipes everything from
    /var/spool/cups "solves" the problem,

    Which would have suggested to me to get into the cups web admin page and
    kill all jobs.

    Note that, as mentioned in my report, I did that. I erased all c00* and
    d00* files in /var/spool/cups, but that seemed like quite a kludge.
    Most of the d00* files had already been removed, and from the above fix
    that seems to have been the problem. From a cursory examination of the
    code, it looks like if the d00* file, the companion to an existing c00*
    file, does not exist, the program assumes that the file exists and has a
    creation time sufficiently far in the past that it should be removed. It
    then reports that it is removing that file, but since it does not exist,
    it does not succeed in removing it, so it tries again a few seconds
    later.
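
    That failure mode can be paraphrased in a few lines of shell (an
    illustration of the logic described above, not the actual CUPS code;
    should_purge is a made-up name):

```shell
# A stat failure on a missing data file is treated as "infinitely old",
# so the job is flagged for purging on every pass, the removal has
# nothing to delete, and the log message repeats forever.
should_purge() {
    file=$1 cutoff=$2
    if ! mtime=$(stat -c %Y "$file" 2>/dev/null); then
        mtime=0     # missing file looks older than any cutoff
    fi
    [ "$mtime" -lt "$cutoff" ]
}
```

    So a job whose d00* file is already gone is "purged" on every sweep,
    and the "Removing document files" line is logged again 30 seconds later.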

    Note that in my case it never filled up the partition. I saw it because
    I was having other cups problems and was looking at the error_log file
    (which was on the debug level) for
    grep '\[Job' /var/log/cups/error_log
    to see what cups was doing with the print jobs (cups error_log tends to
    be so prolix that it is almost impossible to find useful information).




    but then that would mean you would
    have to keep track of the state of the error_log file (before they fill
    the log space)

    In my stupid opinion any competent admin should have a cron job to
    monitor logs for errors.

    But they should not have to monitor them for error reports that are
    bugs.


    I have an hourly cron job that uses xmessage to pop up any failures since
    the last time it ran.

    And this would not have popped up until the file grew so large it ate up
    all the space on the /var/log partition.


    It seems dead easy to parse df output and give you a warning.
    Something like

    #!/bin/bash
    PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
    set -u
    Drive=""
    Line=""
    Used=0

    while read -r Line; do
        set -- $Line
        Drive=$1
        Used=$5
        if [ "$Used" -gt 80 ] ; then
            echo "WARNING: $Drive is $Used% full"
        fi
    done < <(df -h | grep '% /' | tr -d '%')

    Want an email instead? Replace the echo line with
    mail -s "WARNING: $Drive is $Used% full" root < /dev/null

    If you do find a solution, it would be nice if you posted it with
    [SOLUTION] in the Subject line.

    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)
  • From Bit Twister@2:250/1 to All on Thu Jul 23 03:18:12 2020
    Subject: Re: cups flooding the var/log/cups/error_log with "erasing
    documents" errors

    On Thu, 23 Jul 2020 01:33:35 -0000 (UTC), William Unruh wrote:
    On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:


    Well I will admit I did not bother doing any research for you.

    But simply telling someone to go to google search is not terribly
    helpful, especially as I had done that.

    Hmmmm, I can not remember if you had indicated you had done so.
    In any case, my second search result showed they were aware of the
    bug and the patch worked, so I provided the search suggestion.

    Not my fault you decided not to follow my instructions.


    Note that in my case it never filled up the partition. I saw it because
    I was having other cups problems and was looking at the error_log file
    (which was on the debug level) for
    grep '\[Job' /var/log/cups/error_log
    to see what cups was doing with the print jobs (cups error_log tends to
    be so prolix that it is almost impossible to find useful information).




    but then that would mean you would
    have to keep track of the state of the error_log file (before they fill
    the log space)

    In my stupid opinion any competent admin should have a cron job to
    monitor logs for errors.

    But they should not have to monitor them for error reports that are
    bugs.


    I have an hourly cron job that uses xmessage to pop up any failures since
    the last time it ran.

    And this would not have popped up until the file grew so large it ate up
    all the space on the /var/log partition.

    Look at my script again. You would get the warning when the drive is
    more than 80% used.

    As for you finding the problem while working another problem, I suggest
    to you that a script watching for errors will alert you and give you a
    head start on running down the problem, hopefully before someone starts
    hollering.


    --- MBSE BBS v1.0.7.17 (GNU/Linux-x86_64)
    * Origin: A noiseless patient Spider (2:250/1@fidonet)