On Thursday, July 1, 2010 at 1:05:18 AM UTC+5:30, Mr. K.V.B.L. wrote:
or a 1000!
Now, RDi (Rational Developer for i) is also an option to delete spool files.
On 1/4/19 6:36 AM, vishal.gt709@gmail.com wrote:
Now, RDi is also an option to delete spool files.
Depends on the situation... DLTSPLF command with SELECT keyword might help you out.
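For reference, the selective form of the command takes FILE(*SELECT) plus the SELECT parameter; for example, to delete all spooled files owned by a given user (BSMITH is a placeholder profile name):

```
DLTSPLF FILE(*SELECT) SELECT(BSMITH)
```

The SELECT parameter can also narrow by print device, form type, and user data, so it covers the simpler bulk-delete cases without any scripting.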
On Wednesday, June 30, 2010 at 2:35:18 PM UTC-5, Mr. K.V.B.L. wrote:
or a 1000!
This is an old thread but hopefully this will help someone.
I wrestled with the same problem: hundreds of thousands of spool files to delete, and I could not do a CLR.
Tried iNav; it would barf if you selected a large batch to delete, so it was no better than copying and pasting a column of 4s with Ctrl+V.
Then a brainstorm... I use Mocha, and most emulators can do this:
Start recording a macro, do a batch of column-of-4 deletes, page down, and so on, for however many you want to do. End the macro recording, assign it to a key, and boom, you have cut your keystrokes way back.
Caution: it can get out of control when you near the end of the ones you want to delete, because it's like dropping a big bomb and trashing everything, so be careful. Hope this helps.
On 11/20/2019 1:41 PM, rramirezphx18@gmail.com wrote:
This is an old thread but hopefully this will help someone.
I haven't used them, but there are utilities in TAATOOLS that will delete spool files based on various criteria.
If you really want to get creative, you can write an SQL script that deletes multiple spool files for multiple users from multiple output queues, according to almost as many criteria as you can imagine.
There is an IBM-supplied view, QSYS2.OUTPUT_QUEUE_ENTRIES. This has
entries for every spool file on the system. Start by running a simple
query to see what it yields:
select *
from qsys2.output_queue_entries
Most of the spool file attributes found in WRKSPLFA are available as
columns in the result set.
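As a sketch, here is a narrower query over a few of the commonly useful columns (column names are from the view as I know it; verify against the view definition on your release):

```sql
select spooled_file_name, job_name, file_number,
       user_name, user_data, status,
       total_pages, create_timestamp, size
  from qsys2.output_queue_entries
```

Selecting only the columns you need also keeps the result set manageable on a system with a very large number of spool files.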
Once you're familiar with the columns available in OUTPUT_QUEUE_ENTRIES,
you can write a script to delete spool files according to whatever criteria you wish.
begin
  for splfs as
    select spooled_file_name, job_name, file_number
      from qsys2.output_queue_entries
      where user_name = 'BSMITH'
        and date(create_timestamp) < current date - 6 months
  do
    call qsys2.qcmdexc('DLTSPLF FILE(' || spooled_file_name ||
                       ') JOB(' || job_name ||
                       ') SPLNBR(' || trim(char(file_number)) || ')');
  end for;
end;
The dynamic creation of the DLTSPLF command to be passed to QCMDEXC looks awkward, but it shouldn't be too cumbersome for anyone who has done a lot
of concatenation in CL. The job name is already a fully qualified job name of the form jobnbr/jobuser/jobname, so you don't have to string it together from separate columns.
You could add all kinds of criteria to the WHERE clause of the cursor, including multiple users, job queues, specific user data, minimum page
size, etc.
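A hypothetical WHERE clause combining several such criteria might look like this (the queue, user data, and page-count values are made up for illustration):

```sql
where user_name in ('BSMITH', 'TJONES')
  and output_queue_name = 'QPRINT'
  and user_data = 'INVOICES'
  and total_pages >= 100
```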
You can also run an ad hoc query to find out which user has the most
spool file data by count or by size:
select user_name, count(*) splfs, sum(bigint(size)) tot_size
from qsys2.output_queue_entries
group by user_name
order by 2 desc
Ordering by the descending count will tell you who has the most, by descending total size who is consuming the most space. You could add a
WHERE clause that would allow further refinement.
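For instance, limiting the totals to spool files more than 30 days old is a matter of adding one predicate (a sketch along the same lines as the query above):

```sql
select user_name, count(*) splfs, sum(bigint(size)) tot_size
  from qsys2.output_queue_entries
  where create_timestamp < current timestamp - 30 days
  group by user_name
  order by 3 desc
```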