Mageia 7 updated. I have a cups server which prints to an attached
printer. Looking at the error_log file (yes I print to a file rather
than the ridiculous journald logging system) I get repeated statements
(after doing
grep '\[Job' error_log)
D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
....
D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
...
going through almost all of the Job numbers from 1 to 540, in about a couple of minutes.
On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
Mageia 7 updated. I have a cups server which prints to an attached
printer. Looking at the error_log file (yes I print to a file rather
than the ridiculous journald logging system) I get repeated statements
(after doing
grep '\[Job' error_log)
D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
repeated every 30 sec or so for at least 3 days. I then tried to restart
cups, and then got
....
D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
...
going through almost all of the Job numbers from 1 to 540, in about a couple of minutes.
Might I suggest bookmarking the following URL
https://www.google.com/advanced_search
putting cups in the first box
and Removing document files
in the second box, gets me
About 342 results (0.35 seconds)
If you do find a solution, it would be nice if you posted it with
[SOLUTION] in the Subject line.
On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
Mageia 7 updated. I have a cups server which prints to an attached
printer. Looking at the error_log file (yes I print to a file rather
than the ridiculous journald logging system) I get repeated statements
(after doing
grep '\[Job' error_log)
D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
....
D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
...
going through almost all of the Job numbers from 1 to 540, in about a couple of minutes.
Might I suggest bookmarking the following URL
https://www.google.com/advanced_search
putting cups in the first box
and Removing document files
in the second box, gets me
About 342 results (0.35 seconds)
Unfortunately had you looked at them you would have discovered none
offered a solution.
When I searched for
cups repeated error messages removing document files
I got 26,500,000 results,
and one was a bug report on Ubuntu from Jun 2019, which mentioned
the same problem I had (with cups 2.1.3; I have 2.1.13) but no fix.
The bug is reported as confirmed.
One notice says that running "cancel -x -a", which wipes everything from /var/spool/cups, "solves" the problem,
but then that would mean you would
have to keep track of the state of the error_log file (before they fill
the log space)
If you do find a solution, it would be nice if you posted it with
[SOLUTION] in the Subject line.
Bob Tennent wrote:
For years I've used port-forwarding in my router to allow
external access to ftp, ssh, and http servers. Recently,
this no longer works. The servers are running normally and
can be accessed internally. I can even access them via the
router IP address, showing that port-forwarding is still
working. I've also tried port-forwarding and DMZ in the
modem, to no avail. So my conjecture is that the ISP has
decided to block packets it considers unworthy before they
reach the modem.
Is this at all likely? How can I test/confirm the
conjecture? If it is confirmed, what can I do? Needless to
say, ISP "technical support" has so far been useless.
Just for giggles, is your gateway showing an IP address in RFC1918 or
RFC6598 space? Also if you're using DNS, is the DNS pointing at the
right IP address still (e.g. ISP hasn't handed out a new address instead
of what you've had in the past).
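To check that question from a shell, here is a hypothetical helper (the function name and example addresses are mine, not from the thread). If the WAN side of your gateway sits in RFC1918 or RFC6598 space, the ISP is NATing you, and inbound port-forwarding from the internet cannot reach you no matter what the router is configured to do:

```shell
#!/bin/bash
# Classify an IPv4 address as RFC1918 (private), RFC6598 (carrier-grade
# NAT), or public, by testing its first two octets against the ranges.
classify_ip() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    if [ "$a" -eq 10 ]; then
        echo RFC1918                                    # 10.0.0.0/8
    elif [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ]; then
        echo RFC1918                                    # 172.16.0.0/12
    elif [ "$a" -eq 192 ] && [ "$b" -eq 168 ]; then
        echo RFC1918                                    # 192.168.0.0/16
    elif [ "$a" -eq 100 ] && [ "$b" -ge 64 ] && [ "$b" -le 127 ]; then
        echo RFC6598                                    # 100.64.0.0/10
    else
        echo public
    fi
}

classify_ip 100.71.3.9     # prints: RFC6598
classify_ip 203.0.113.7    # prints: public
```

Feed it whatever WAN address your gateway's status page reports; anything other than "public" confirms the conjecture without needing the ISP's help.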
On Wed, 22 Jul 2020 21:55:49 -0000 (UTC), William Unruh wrote:
On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
On Tue, 21 Jul 2020 21:25:27 -0000 (UTC), William Unruh wrote:
Mageia 7 updated. I have a cups server which prints to an attached
printer. Looking at the error_log file (yes I print to a file rather
than the ridiculous journald logging system) I get repeated statements (after doing
grep '\[Job' error_log)
D [19/Jul/2020:03:54:48 -0700] [Job 309] Removing document files
repeated every 30 sec or so for at least 3 days. I then tried to restart cups, and then got
....
D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 523] Removing document files.
...
going through almost all of the Job numbers from 1 to 540, in about a couple of minutes.
Might I suggest bookmarking the following URL
https://www.google.com/advanced_search
putting cups in the first box
and Removing document files
in the second box, gets me
About 342 results (0.35 seconds)
Unfortunately had you looked at them you would have discovered none
offered a solution.
Well I will admit I did not bother doing any research for you.
When I searched for
cups repeated error messages removing document files
I got 26,500,000 results,
Then I suggest you did not follow my instructions. I tried it again:
in the first box
cups
in the second box
Removing document files
and get
About 341 results (0.51 seconds)
and one was a bug report on Ubuntu from Jun 2019, which mentioned
the same problem I had (with cups 2.1.3; I have 2.1.13) but no fix.
Yup, and that is why I would have picked the second search result I got.
The bug is reported as confirmed.
One notice says that running "cancel -x -a" which wipes everything from
/var/spool/cups "solves" the problem,
Which would have suggested to me to get into the cups web admin page and
kill all jobs.
but then that would mean you would
have to keep track of the state of the error_log file (before they fill
the log space)
In my stupid opinion any competent admin should have a cron job to
monitor logs for errors.
I have an hourly cron job that uses xmessage to pop up any failures since
the last time it ran.
It seems dead easy to parse df output and give yourself a warning.
Something like:
#!/bin/bash
# Warn about any filesystem that is more than 80% full.
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
set -u
Drive=""
Line=""
Used=0
# df -P keeps each filesystem on a single line; tr strips the '%'
# so the Use% field can be compared numerically.
while read -r Line; do
    set -- $Line        # split the df line into fields
    Drive=$1            # field 1: filesystem/device
    Used=$5             # field 5: percent used
    if [ "$Used" -gt 80 ] ; then
        echo "WARNING: $Drive is $Used% full"
    fi
done < <(df -hP | grep '% /' | tr -d '%')
If you want an email instead, replace the echo line with
mail -s "WARNING: $Drive is $Used% full" root < /dev/null
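Wired into cron, a check like that could run hourly; a sketch of a /etc/cron.d entry (the install path and file name are examples, not anything from this thread):

```
# /etc/cron.d/diskcheck -- hypothetical; adjust the script path to taste
# min hour day month weekday user command
0 * * * * root /usr/local/sbin/check_disk_usage
```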
If you do find a solution, it would be nice if you posted it with
[SOLUTION] in the Subject line.
On 2020-07-22, Bit Twister <BitTwister@mouse-potato.com> wrote:
Well I will admit I did not bother doing any research for you.
But simply telling someone to go to google search is not terribly
helpful, especially as I had done that.
Note that in my case it never filled up the partition. I saw it because
I was having other cups problems and was looking at the error_log file
(which was on the debug level) for
grep '\[Job' /var/log/cups/error_log
to see what cups was doing with the print jobs (cups error_log tends to
be so prolix that it is almost impossible to find useful information).
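One way to cut through that prolixity is to tally the repeats per job rather than read them. A sketch against the log format quoted above (the function name is mine; the sample lines stand in for /var/log/cups/error_log):

```shell
#!/bin/bash
# Count "Removing document files" lines per job ID in a CUPS error_log.
# Splitting on '[' and ']' makes field 4 the "Job NNN" token.
count_removals() {
    awk -F'[][]' '/Removing document files/ {count[$4]++}
                  END {for (j in count) print j ": " count[j]}' "$1" | sort
}

# Demonstration on a throwaway sample file:
log=$(mktemp)
cat > "$log" <<'EOF'
D [21/Jul/2020:09:13:11 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:41 -0700] [Job 521] Removing document files
D [21/Jul/2020:09:13:11 -0700] [Job 522] Removing document files
EOF
count_removals "$log"
# Job 521: 2
# Job 522: 1
rm -f "$log"
```

Pointing it at the real error_log shows at a glance whether one job is looping or, as after the restart described above, every old job number is being replayed.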
but then that would mean you would
have to keep track of the state of the error_log file (before they fill
the log space)
In my stupid opinion any competent admin should have a cron job to
monitor logs for errors.
But they should not have to monitor them for error reports that are
bugs.
I have an hourly cron job that uses xmessage to pop up any failures since
the last time it ran.
And this would not have popped up until the file grew so large it ate up
all the space on the /var/log partition.
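Until the underlying bug is fixed, one mitigation is to let cupsd cap and rotate its own log file rather than watch it by hand. A sketch of the relevant directives (the values are examples, not recommendations; check man cupsd.conf for where MaxLogSize lives on your CUPS version):

```
# /etc/cups/cupsd.conf
LogLevel warn      # drop back from "debug" once you are done diagnosing
MaxLogSize 1m      # rotate error_log at about 1 MB; 0 disables rotation
```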