• What to do when you have 100s of spool files to delete?

  • From vishal.gt709@gmail.com@21:1/5 to Mr. K.V.B.L. on Fri Jan 4 04:36:59 2019
    On Thursday, July 1, 2010 at 1:05:18 AM UTC+5:30, Mr. K.V.B.L. wrote:
    or a 1000!

    Now, RDi is also an option for deleting spool files.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Salty Salt@21:1/5 to vishal.gt709@gmail.com on Sun Jan 6 22:40:39 2019
    On 1/4/19 6:36 AM, vishal.gt709@gmail.com wrote:
    On Thursday, July 1, 2010 at 1:05:18 AM UTC+5:30, Mr. K.V.B.L. wrote:
    or a 1000!

    Now, RDi is also an option for deleting spool files.


    Depends on the situation... the DLTSPLF command with the SELECT parameter might
    help you out.
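For example (a sketch only; BSMITH is a made-up user profile, and you should check the SELECT element order on your release first), DLTSPLF FILE(*SELECT) deletes every spool file matching the selection values in one command. It can be typed at a CL command line, or run from an SQL session through the QSYS2.QCMDEXC procedure:

```sql
-- DLTSPLF FILE(*SELECT) deletes every spooled file matching the SELECT
-- values: (user print-device form-type user-data). *ALL leaves a filter open.
call qsys2.qcmdexc('DLTSPLF FILE(*SELECT) SELECT(BSMITH *ALL *ALL *ALL)');
```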

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jonathan Ball@21:1/5 to Salty Salt on Sat Jan 19 16:01:26 2019
    On 1/6/2019 8:40 PM, Salty Salt wrote:
    On 1/4/19 6:36 AM, vishal.gt709@gmail.com wrote:
    On Thursday, July 1, 2010 at 1:05:18 AM UTC+5:30, Mr. K.V.B.L. wrote:
    or a 1000!

    Now, RDi is also an option for deleting spool files.


    Depends on the situation... the DLTSPLF command with the SELECT parameter might help you out.

    Always kind of amused when someone responds to a years-old post, but then again, stuff changes in the interim.

    Consider writing an SQL script that uses the IBM-supplied view QSYS2.OUTPUT_QUEUE_ENTRIES. It contains the relevant data for every
    spool file in every output queue on the system. I've written scripts to delete, say, spool files older than six months for a particular user:


    begin
      for splfs cursor for
        select *
          from qsys2.output_queue_entries
          where user_name = 'BSMITH'
            and date(create_timestamp) < current date - 6 months
        do
          call qcmdexc('dltsplf ' || spooled_file_name || ' job(' || job_name ||
                       ') splnbr(' || char(file_number) || ')');
      end for;
    end;


    You could add all kinds of criteria to the WHERE clause of the cursor, including multiple users, specific user data, minimum page size, etc.
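As a sketch of that, with made-up user names and an arbitrary cutoff, the cursor's WHERE clause could become:

```sql
-- Several users, one user-data value, big files only, older than six months.
-- Column names are as I recall them from OUTPUT_QUEUE_ENTRIES; verify them
-- on your release before trusting the result.
where user_name in ('BSMITH', 'JJONES')
  and user_data = 'INVOICES'
  and total_pages >= 100
  and date(create_timestamp) < current date - 6 months
```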

    You can also run an ad hoc query to find out which user has the most spool
    file data by count or by size:

    select user_name, count(*) splfs, sum(bigint(size)) tot_size
    from qsys2.output_queue_entries
    group by user_name
    order by 2 desc

    Ordering by the descending count will tell you who has the most, by
    descending total size who is consuming the most space. You could add a
    WHERE clause that would allow further refinement.
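For example, to rank by space consumed instead, order by the third column and, if you like, hide small consumers with a HAVING clause (the threshold here is arbitrary, and is in whatever units the view's SIZE column reports):

```sql
select user_name, count(*) splfs, sum(bigint(size)) tot_size
  from qsys2.output_queue_entries
  group by user_name
  having sum(bigint(size)) > 1000000
  order by 3 desc
```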

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rramirezphx18@gmail.com@21:1/5 to Mr. K.V.B.L. on Wed Nov 20 13:41:10 2019
    On Wednesday, June 30, 2010 at 2:35:18 PM UTC-5, Mr. K.V.B.L. wrote:
    or a 1000!

    This is an old thread but hopefully this will help someone.

    I wrestled with the same problem: hundreds of thousands of spool files to delete, and I could not do a CLR.

    Tried iNav; it would barf if you selected a large batch to delete. No better than copying a column of 4s and pasting it with Ctrl+V.

    Then a brainstorm... I use Mocha, but almost all emulators can do this...

    Start macro recording, do a batch of column-of-4 deletes, page down, and so on, for however many you want to do. End the macro recording, assign it to a key, and boom, you have cut your keystrokes way back.

    Caution: it can get out of control when you near the end of the ones you want to delete, because it's like dropping a big bomb and trashing everything, so be careful. Hope this helps.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jonathan Ball@21:1/5 to rramirezphx18@gmail.com on Wed Nov 20 22:57:56 2019
    On 11/20/2019 1:41 PM, rramirezphx18@gmail.com wrote:
    On Wednesday, June 30, 2010 at 2:35:18 PM UTC-5, Mr. K.V.B.L. wrote:
    or a 1000!

    This is an old thread but hopefully this will help someone.

    I wrestled with the same problem: hundreds of thousands of spool files to delete, and I could not do a CLR.

    Tried iNav; it would barf if you selected a large batch to delete. No better than copying a column of 4s and pasting it with Ctrl+V.

    Then a brainstorm... I use Mocha, but almost all emulators can do this...

    Start macro recording, do a batch of column-of-4 deletes, page down, and so on, for however many you want to do. End the macro recording, assign it to a key, and boom, you have cut your keystrokes way back.

    Caution: it can get out of control when you near the end of the ones you want to delete, because it's like dropping a big bomb and trashing everything, so be careful. Hope this helps.


    I haven't used them, but there are utilities in TAATOOLS that will delete
    spool files based on various criteria.

    If you really want to get creative, you can write a SQL script that will
    allow you to delete multiple spool files for multiple users from multiple output queues according to almost as many criteria as you can imagine.
    There is an IBM-supplied view, QSYS2.OUTPUT_QUEUE_ENTRIES. This has
    entries for every spool file on the system. Start by running a simple
    query to see what it yields:

    select *
    from qsys2.output_queue_entries

    Most of the spool file attributes found in WRKSPLFA are available as
    columns in the result set.
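Once you know what's there, a narrower projection reads better than SELECT *. A sketch only; the exact column list varies by release, so verify the names first:

```sql
-- A few of the WRKSPLFA-style attributes the view exposes
select spooled_file_name, job_name, file_number, user_name,
       user_data, status, total_pages, create_timestamp
  from qsys2.output_queue_entries
  order by create_timestamp desc
```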

    Once you're familiar with the columns available in OUTPUT_QUEUE_ENTRIES,
    you can write a script to delete spool files according to whatever criteria
    you wish.

    begin
      for splfs cursor for
        select *
          from qsys2.output_queue_entries
          where user_name = 'BSMITH'
            and date(create_timestamp) < current date - 6 months
        do
          call qcmdexc('dltsplf ' || spooled_file_name || ' job(' || job_name ||
                       ') splnbr(' || char(file_number) || ')');
      end for;
    end;

    The dynamic creation of the DLTSPLF command to be passed to QCMDEXC looks awkward, but it shouldn't be too cumbersome for anyone who has done a lot
    of concatenation in CL. The job name is already a fully qualified job name
    of the form jobnbr/jobuser/jobname, so you don't have to string it together from separate columns.

    You could add all kinds of criteria to the WHERE clause of the cursor, including multiple users, job queues, specific user data, minimum page
    size, etc.
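For instance, to confine the purge to a single output queue (the queue and library names here are invented; OUTPUT_QUEUE_NAME and OUTPUT_QUEUE_LIBRARY_NAME are the queue columns, as far as I recall), the cursor's WHERE clause might look like:

```sql
-- One output queue, large files only; adjust names and threshold to taste
where output_queue_library_name = 'QGPL'
  and output_queue_name = 'PRT01'
  and total_pages >= 50
```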

    You can also run an /ad hoc/ query to find out which user has the most
    spool file data by count or by size:

    select user_name, count(*) splfs, sum(bigint(size)) tot_size
    from qsys2.output_queue_entries
    group by user_name
    order by 2 desc

    Ordering by the descending count will tell you who has the most, by
    descending total size who is consuming the most space. You could add a
    WHERE clause that would allow further refinement.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mcalabrese-consultant@scholastic.co@21:1/5 to Jonathan Ball on Wed Mar 18 10:36:03 2020
    On Thursday, November 21, 2019 at 1:58:01 AM UTC-5, Jonathan Ball wrote:
    [snip]

    Awesome! THIS was my answer! Thank you!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)