  1. #1
    Join Date
    Jun 2006

    Unanswered: Kindly comment on my database recovery policy!!


    I have inherited some DB2 instances (V8.1.3) on AIX 5.2 at my new company. In all these instances, the following parameters are set: LOGRETAIN=RECOVERY and USEREXIT=OFF.


    My previous DB2 admin was running a script which (according to his documentation) can be executed at any time (daily or every two days), depending on the used space in the /home/db2inst1 filesystem.

    This script simply searches for the first active log file and then executes "prune logfile prior to" that first active file.
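    Roughly, the script boils down to something like this (a sketch, not the actual code; MYDB stands for the database name):

    # Read the first active log file from the database configuration ...
    FIRST_ACTIVE=`db2 get db cfg for MYDB | grep 'First active log file' | cut -f2 -d'=' | sed 's/^ *//'`

    # ... and prune everything older than it (requires a connection).
    db2 connect to MYDB
    db2 prune logfile prior to $FIRST_ACTIVE
    db2 connect reset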

    I have the following concerns in mind:

    1. Under these circumstances, archiving is not on (as USEREXIT=OFF). What problems might we face if there is any problem with the disk storage? (At present the same disk contains both the data and the active log files, i.e. the default /home/db2inst1 filesystem.)

    2. Should I immediately start archiving by configuring USEREXIT ON (to another disk on the same server)?
    If I do so, then what mechanism should I use to remove older archive log files from that disk? (If I do not remove archive log files at all, disk space utilization will grow substantially.)

    Kindly advise on the same!!!



  2. #2
    Join Date
    May 2003
    I would seriously consider upgrading to 8.2 (FP12 is the latest and very stable) for improved log archiving and the many other enhancements and bug fixes.

    In 8.2 there is an option called LOGARCHMETH1 that specifies the path where you want log archives to go. No user exit program is necessary. You just specify the path as follows:

    db2 update db cfg using LOGARCHMETH1 DISK:/your_log_archive_path_name

    You can also use TSM: to integrate with TSM backup of the archive logs.
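    For example, to hand the archives straight to TSM instead of a disk path (MYDB is a placeholder database name):

    db2 update db cfg for MYDB using LOGARCHMETH1 TSM

    Keep in mind that switching a database from circular logging to archive logging puts it into backup pending state, so plan for an offline backup right after the change.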

    In the meantime, you may want to see the 8.1 Data Recovery and High Availability Guide and Reference for more information about log archiving in V8.1.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  3. #3
    Join Date
    Jun 2006
    Hi Marcus

    Many thanks for your quick reply.

    We have some application compatibility issues with DB2 8.2. Our application vendor is already working on these issues, but it may take 4-6 months to resolve them all and test the application with the latest version of DB2.

    Meanwhile, I cannot take any risks; what my predecessor was doing was his responsibility, but now it is mine to make sure correct database recovery procedures are in place.

    I hope that you guys will understand the situation and advise with your best experience.

    Thanks in advance


  4. #4
    Join Date
    Aug 2001
    It makes perfect sense to use userexit ...

    If you are using the default DB2-provided userexit, deletion of archived logs is taken care of ...

    Pruning the logs 'blindly' will render your previous online backups obsolete ... You should have log files at least as old as the oldest online backup you would want to use ...

    If you are archiving to disk, then make sure you copy the logs over to a tape device before your archive log filesystem gets filled up ... Otherwise, archiving will fail, causing the active log space to grow until your applications finally fail.
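    For instance, a nightly job along these lines could do the sweep (the paths and the tape device are placeholders; adjust to your environment):

    #!/bin/ksh
    # Sketch: write archived logs older than a day to tape, then remove them.
    ARCHDIR=/db2/archlogs     # placeholder: archive log filesystem
    TAPE=/dev/rmt0            # placeholder: tape device

    cd $ARCHDIR || exit 1

    # Collect log files that have not been touched for at least a day ...
    find . -name 'S*.LOG' -mtime +1 -print > /tmp/logs_to_tape.$$

    # ... write them to tape, and remove them only if the write succeeded.
    if [ -s /tmp/logs_to_tape.$$ ]; then
        tar -cvf $TAPE `cat /tmp/logs_to_tape.$$` && rm `cat /tmp/logs_to_tape.$$`
    fi
    rm -f /tmp/logs_to_tape.$$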



  5. #5
    Join Date
    Mar 2004
    Hi Jones,

    Been there (where you are now), so I can understand your situation.

    As you have indicated, you have 2 options - either turn on userexit, archive to another directory/disk, and do the cleanup there, or continue using the existing solution.

    I would highly recommend the userexit option.
    You don't need the AIX/IBM C compiler; you can even compile the userexit with the GNU compiler.

    Once you have the userexit set up and logs are archiving, you can back up and then delete the log files, using the "fuser" program in UNIX/AIX.
    Before you back up/delete an archived log file, use fuser on that log file. If it shows that the file is in use, do not back it up or delete it. Alternatively, you can use the "find" command to back up/delete older files (e.g. with -mtime +1), as sketched below.
    I am assuming that you have a mechanism to back up your files before deleting them.
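    Something like this (a sketch; the path is a placeholder, and backup_log stands in for whatever backup step you already have):

    for f in /db2/archlogs/S*.LOG; do
        # fuser returns 0 while some process still has the file open
        if fuser "$f" > /dev/null 2>&1; then
            echo "Skipping $f - still in use"
        else
            backup_log "$f" && rm "$f"   # back up first, remove only on success
        fi
    done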

    If you choose to go with the existing solution, then here's a high-level list of things you may want to do -

    1. Get the value of LOGPRIMARY (db2 get db cfg for <db>)
    2. Get the number of files in your log directory (ls -1 <dir> | wc -l)
    3. Ensure that you are not currently using any secondary log files (db2 get snapshot for db on <db>)
    4. If the number of log files in your log directory is greater than the number of primary logs + the secondary log files currently in use, then you have old log files that can be pruned.
    5. Prune log files prior to (current active log file - 1).

    Here's a function that I used (as part of a large shell script) to do the second option above -
    #======================================================
    function prune_log_files
    #======================================================
    {
        # As a built-in safety valve, we want to ensure that the
        # number of log files to retain is NEVER null or less than 5.
        if [ -z "$COUNT_OF_LOG_FILES_TO_RETAIN" ] || \
           [ "$COUNT_OF_LOG_FILES_TO_RETAIN" -lt 5 ]; then
            print_error_message "Number of log files to retain ($COUNT_OF_LOG_FILES_TO_RETAIN) is either NULL or less than 5"
            return 1
        fi

        # As another built-in safety valve, we want to ensure that the
        # number of log files to retain is NEVER less than LOGPRIMARY.
        LOGPRIMARY="`db2 get db cfg for $DB2DB | grep LOGPRIMARY | cut -f2 -d'=' | sed 's/^ *//'`"
        if [ "$COUNT_OF_LOG_FILES_TO_RETAIN" -lt "$LOGPRIMARY" ]; then
            print_error_message "Number of log files to retain ($COUNT_OF_LOG_FILES_TO_RETAIN) is less than LOGPRIMARY ($LOGPRIMARY) for database $DB2DB"
            return 1
        fi

        # Now check if the number of files in the log directory
        # is greater than the number of files to retain.
        # This check is necessary because the "head -N" command that
        # we will use later returns lines/files even when the number
        # of lines/files is less than N !!
        # So we need to ensure the directory has at least the minimum
        # number of log files before we embark on pruning.
        LOG_FILE_DIR="`db2 get db cfg for $DB2DB | grep 'Path to log files' | cut -f2 -d'=' | sed 's/^ *//'`"

        if [[ ! -d $LOG_FILE_DIR ]]; then
            print_error_message "Log file directory ($LOG_FILE_DIR) is either not a directory or is non-existent"
            return 1
        fi

        CURR_LOG_FILE_COUNT="`ls -1 $LOG_FILE_DIR | wc -l | sed 's/^ *//'`"
        if [ "$CURR_LOG_FILE_COUNT" -lt "$COUNT_OF_LOG_FILES_TO_RETAIN" ]; then
            print_error_message "Current no. of log files ($CURR_LOG_FILE_COUNT) is less than $COUNT_OF_LOG_FILES_TO_RETAIN"
            return 1
        fi

        # Now (at last!!) perform the actual pruning.
        # But wait, you need to know the qualifying log file first!
        # That file is the "oldest" log file to retain (at least in log
        # number), i.e. the lowest-numbered of the newest
        # $COUNT_OF_LOG_FILES_TO_RETAIN files - so list newest first
        # with "ls -1r" and take the last of the first N entries.
        LAST_LOG_FILE_TO_RETAIN="`ls -1r $LOG_FILE_DIR | head -$COUNT_OF_LOG_FILES_TO_RETAIN | tail -1`"

        echo "\n\nLog files in the log directory BEFORE pruning...\n"
        ls -l $LOG_FILE_DIR
        db2 prune logfile prior to $LAST_LOG_FILE_TO_RETAIN
        echo "\n\nLog files in the log directory AFTER pruning...\n"
        ls -l $LOG_FILE_DIR
    }
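    To give an idea of how the function is driven from the wrapper script (the values below are examples only):

    # The caller sets these before invoking the function:
    DB2DB=MYDB                        # database name (example)
    COUNT_OF_LOG_FILES_TO_RETAIN=20   # keep the 20 newest log files

    db2 connect to $DB2DB
    prune_log_files
    db2 connect reset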

