• DB2/LUW Tablespace Backup/Recovery Strategy concerning archived (read-only) data

    From wolfgangbrandl7@gmail.com@21:1/5 to All on Tue Feb 21 05:15:03 2017
    Hi
    We have a database of nearly 3 TB where most of the data (80%) is stored in tablespaces holding "archival data". These archival tablespaces are reorganized once, at the beginning of each year, to hold the previous year's data. The data is still used in joins, or read directly with a SELECT, in transactions that also handle the active data. Because the tables and indexes in these "archive" tablespaces are reorganized, read only, and will never change again, I thought it would be good to back up these tablespaces only once a year; for the other tablespaces I make a weekly online
    tablespace backup including logs, and on the other days an incremental online backup.
    I have simulated this both for a system crash and for restoring some tablespaces into a different instance, to recover data that could be destroyed by human error etc.
    My question now is: do I also have to keep the archived logs written since the backups of the large archived tablespaces were taken?

    My example

    There are two tablespaces which simulate the big archived tablespaces: NSPACE and BSPACE.
    The tablespaces for changing data are: SYSCATSPACE, USERSPACE1, URSPACE, MYSPACE. I make the following backups once at the beginning of the year:
    T1 : backup database $DB tablespace (SYSCATSPACE,NSPACE) online to $BACKUPPATH include logs
    T2 : backup database $DB tablespace (SYSCATSPACE,BSPACE) online to $BACKUPPATH include logs

    I make a backup on Sunday every week:

    T3 : backup database $DB tablespace (SYSCATSPACE,USERSPACE1,URSPACE,MYSPACE) online to $BACKUPPATH include logs
    I make an incremental backup every other day of the week:
    T4 : backup database $DB tablespace (SYSCATSPACE,USERSPACE1,URSPACE,MYSPACE) online incremental to $BACKUPPATH include logs
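    As a side note (not in the original post), the integrity of the images in such a chain can be checked before relying on them, using the recovery history file and the standard db2ckbkp utility. A sketch, where the image file name shown (instance name db2inst1, single partition) is only an assumed example of DB2's usual naming pattern:

    ```shell
    # List all backup images recorded in the recovery history file, so the
    # yearly T1/T2 images and the weekly/incremental T3/T4 images can be matched up.
    db2 "LIST HISTORY BACKUP ALL FOR $DB"

    # Verify that a given backup image is structurally valid and restorable.
    # The file name encodes alias, backup type, instance, partition and timestamp.
    db2ckbkp $BACKUPPATH/$DB.0.db2inst1.DBPART000.$T1.001
    ```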


    Scenario 1:
    A crash happened and I have lost all the disks of the database, including the archived log space.

    Crash recovery would be set up as follows:
    RESTORE DATABASE $DB REBUILD WITH all tablespaces in image INCREMENTAL FROM $BACKUPPATH TAKEN AT $T4 DBPATH ON '$CONTAINERPATH/metadata/$DB' LOGTARGET '$PWD/logretain' REDIRECT WITHOUT PROMPTING;
    SET STOGROUP PATHS FOR IBMSTOGROUP ON '$CONTAINERPATH/tablespace/$DB';
    SET STOGROUP PATHS FOR TS_U ON '$CONTAINERPATH/TS_U_SPACE/$DB';
    SET STOGROUP PATHS FOR TS_M ON '$CONTAINERPATH/TS_M_SPACE/$DB';
    SET STOGROUP PATHS FOR TS_B ON '$CONTAINERPATH/TS_B_SPACE/$DB';
    SET STOGROUP PATHS FOR TS_N ON '$CONTAINERPATH/TS_N_SPACE/$DB';
    RESTORE DATABASE $DB CONTINUE;
    RESTORE DATABASE $DB incremental FROM $BACKUPPATH TAKEN AT $T3 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;
    RESTORE DATABASE $DB incremental FROM $BACKUPPATH TAKEN AT $T4 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;
    rollforward database $DB to end of logs and stop overflow log path ('$PWD/logretain');
    ! rm -rf $PWD/logretain/* ;
    RESTORE DATABASE $DB tablespace (BSPACE) FROM $BACKUPPATH TAKEN AT $T2 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;
    rollforward database $DB to end of logs and stop overflow log path ('$PWD/logretain');
    RESTORE DATABASE $DB tablespace (NSPACE) FROM $BACKUPPATH TAKEN AT $T1 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;
    rollforward database $DB to end of logs and stop overflow log path ('$PWD/logretain');
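    After the final rollforward it may also be worth confirming that every tablespace has returned to NORMAL state. A possible check (not part of the original script), using the MON_GET_TABLESPACE table function:

    ```shell
    # Show the recovery state of every tablespace on all members (-2);
    # after a successful rebuild and rollforward all should report NORMAL.
    db2 "SELECT VARCHAR(TBSP_NAME, 20) AS TBSP_NAME, TBSP_STATE
         FROM TABLE(MON_GET_TABLESPACE(NULL, -2)) ORDER BY TBSP_NAME"
    ```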


    Scenario 2:
    Some human error happened: a badly tested batch job destroyed some data, or something similar. Because the original DB is online, I have to restore into a different database, and in my scenario into a different instance. So I have to redirect onto other disk
    storage.

    RESTORE DATABASE $DB REBUILD WITH all tablespaces in image INCREMENTAL FROM $BACKUPPATH TAKEN AT $T4 ON '$REPLACEPATH/tablespace/$DB' DBPATH ON '$REPLACEPATH/metadata/$DB' LOGTARGET '$PWD/logretain' REDIRECT WITHOUT PROMPTING;

    SET STOGROUP PATHS FOR IBMSTOGROUP ON '$REPLACEPATH/tablespace/$DB';

    SET STOGROUP PATHS FOR TS_U ON '$REPLACEPATH/TS_U_SPACE/$DB';

    SET STOGROUP PATHS FOR TS_M ON '$REPLACEPATH/TS_M_SPACE/$DB';

    SET STOGROUP PATHS FOR TS_B ON '$REPLACEPATH/TS_B_SPACE/$DB';

    SET STOGROUP PATHS FOR TS_N ON '$REPLACEPATH/TS_N_SPACE/$DB';

    RESTORE DATABASE $DB CONTINUE;

    RESTORE DATABASE $DB incremental FROM $BACKUPPATH TAKEN AT $T3 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;

    RESTORE DATABASE $DB incremental FROM $BACKUPPATH TAKEN AT $T4 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;

    rollforward database $DB to end of logs and stop overflow log path ('$PWD/logretain');

    connect to $DB;

    connect reset;

    RESTORE DATABASE $DB tablespace (BSPACE) FROM $BACKUPPATH TAKEN AT $T2 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;

    rollforward database $DB to end of backup and stop tablespace(BSPACE) overflow log path ('$PWD/logretain') ;



    RESTORE DATABASE $DB tablespace (NSPACE) FROM $BACKUPPATH TAKEN AT $T1 LOGTARGET '$PWD/logretain' WITHOUT PROMPTING;

    rollforward database $DB to end of backup and stop tablespace(NSPACE) overflow log path ('$PWD/logretain') ;

    update db cfg for $DB using NEWLOGPATH $REPLACEPATH/log/$DB;





    I have tried this several times and it worked. What I cannot test is the duration of a full year without saving the archived log files. Could I run into problems which I didn't consider or think about?


    Regards
    Wolfgang

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeremy Rickard@21:1/5 to wolfgan...@gmail.com on Sun Feb 26 19:04:40 2023
    On Wednesday, 22 February 2017 at 02:15:04 UTC+13, wolfgan...@gmail.com wrote:
    [original question quoted; snipped]

    What a great question this was! Shame there were no answers. Still, 6 years and a week late, not so bad...

    The answer is that if you only need to recover the archive tablespaces to end of backup (way back) then you should not need archive logs beyond that PIT - provided that those tablespaces are self-consistent (e.g. there are no tables in these tablespaces
    that use other tablespaces with a different PIT for indexes or partitions). So it would be sensible to test what happens if somebody created such an object at a later date (without checking, I believe this would block recovery). It might also be sensible
    to INCLUDE LOGS in those archive backups, to mitigate any risk that they are removed after a certain age.
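
    To make that self-consistency check concrete, one way (a sketch, not from the thread) to look for tables whose data lives in an archive tablespace but whose indexes or long/LOB data live elsewhere is to query the catalog view SYSCAT.TABLES, which exposes TBSPACE, INDEX_TBSPACE and LONG_TBSPACE:

    ```shell
    # Find tables placed in the archive tablespaces whose indexes or long/LOB
    # data sit in a different tablespace; such tables would tie the archive
    # PIT to that other tablespace's PIT and could block tablespace recovery.
    db2 "SELECT TABSCHEMA, TABNAME, TBSPACE, INDEX_TBSPACE, LONG_TBSPACE
         FROM SYSCAT.TABLES
         WHERE TBSPACE IN ('NSPACE', 'BSPACE')
           AND ((INDEX_TBSPACE IS NOT NULL AND INDEX_TBSPACE <> TBSPACE)
             OR (LONG_TBSPACE  IS NOT NULL AND LONG_TBSPACE  <> TBSPACE))"
    ```

    For range-partitioned tables, SYSCAT.DATAPARTITIONS would need a similar check per partition.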

    Jeremy
