Results 1 to 8 of 8
  1. #1
    Join Date
    Aug 2016
    Posts
    5

    Unanswered: NOT LOGGED INITIALLY - HADR

    Hi,

    We are using DB2 v9.7.9 and recently started facing a problem with our DB purge activity after switching to HADR.
    We alter the target tables in production with NOT LOGGED INITIALLY and create backup tables to ensure zero data loss in case something goes wrong. In addition, we load from a cursor using the NONRECOVERABLE option.

    [Attachment: abc.JPG]

    Can you please suggest what to do next? A simple DELETE takes a very long time because of the huge amount of data in the table, and it also generates large transaction logs.

  2. #2
    Join Date
    Apr 2012
    Posts
    1,006
    Provided Answers: 16
    NOT LOGGED INITIALLY is incompatible with HADR.
    Consider an alternative method for purging content.
    For example, use range-partitioned tables (rolling out old ranges does not use DELETE) if your DB2 licence allows it.
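    A roll-out purge on a range-partitioned table might look like the sketch below. The table, column, partition, and archive names are all hypothetical placeholders, not taken from the thread:

    Code:
    # Sketch only: mytable, created, and the partition layout are placeholders.
    # Create the table range-partitioned by month so old data can be rolled out.
    db2 "CREATE TABLE mytable (created DATE, payload VARCHAR(100))
         PARTITION BY RANGE (created)
         (STARTING '2016-01-01' ENDING '2016-12-31' EVERY 1 MONTH)"

    # Roll out an old range: DETACH is a catalog operation, not a
    # row-by-row DELETE, so it avoids the heavy transaction logging.
    db2 "ALTER TABLE mytable DETACH PARTITION part0 INTO TABLE mytable_archive"

    # Drop the detached table once the archived rows are no longer needed.
    db2 "DROP TABLE mytable_archive"

    The DETACH itself is fast; note that in 9.7 index cleanup after a detach happens asynchronously in the background.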

  3. #3
    Join Date
    Jul 2016
    Location
    Moscow
    Posts
    68
    Provided Answers: 7
    Hi,

    What's the result of the following command?
    Code:
    db2 get db cfg for mydb | grep BLOCKNONLOGGED
    If it's ON, then this behavior is expected - you have intentionally blocked such operations.
    Regards,
    Mark.

  4. #4
    Join Date
    Aug 2016
    Posts
    5
    Quote Originally Posted by mark.bb View Post
    Hi,

    What's the result of the following command?
    Code:
    db2 get db cfg for mydb | grep BLOCKNONLOGGED
    If it's ON, then such a behavior is expected - you intentionally blocked such operations.
    ---------------------------------------------------------

    Actually, switching it on was a hard requirement for us.

    Is there any approach apart from what was suggested above (partitioning)? There are some data-model restrictions, so the table can't be partitioned.

  5. #5
    Join Date
    Apr 2012
    Posts
    1,006
    Provided Answers: 16
    Permanently running scavenger threads that use a small commit count for plain DELETEs can spread the purge overhead over a much longer time without saturating the HADR link. The total amount of logging stays the same, but it is spread over a much longer period. This might not suit some applications.
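    A scavenger-style purge with a small commit interval could be sketched like this. The database name, table, retention predicate, batch size, and sleep interval are all placeholders to adapt:

    Code:
    #!/bin/sh
    # Sketch: delete old rows in small batches, committing after each batch
    # so the logs and the HADR link are never saturated at once.
    db2 connect to mydb
    while true; do
        # Delete at most 10000 matching rows per batch (fullselect delete).
        db2 +c "DELETE FROM (SELECT 1 FROM mytable
                WHERE created < CURRENT DATE - 90 DAYS
                FETCH FIRST 10000 ROWS ONLY)"
        rc=$?
        db2 commit
        # CLP return code 1 means SQLCODE +100 (no rows matched): done.
        [ $rc -ne 0 ] && break
        sleep 5   # throttle to spread the logging over time
    done
    db2 connect reset

    The `db2 +c` option disables auto-commit so each batch is committed explicitly; tune the batch size and sleep to what your HADR link can absorb.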

    If the design allows occasional bulk purges within a maintenance window, where a high percentage of a table's rows are to be removed, consider copying the smaller percentage (the rows to preserve) to another table and using a LOAD REPLACE to put them back. Additionally, ensure that the LOAD has COPY YES suitably configured and that the HADR standbys are configured to replay the load. This might be suitable when more than 50% of the rows are being purged in a maintenance window.
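    The copy-and-replace approach above could be sketched as follows; the database, table names, retention predicate, and load-copy path are placeholders:

    Code:
    # Sketch of the copy-and-replace purge; all names/paths are placeholders.
    db2 connect to mydb

    # 1. Copy only the rows to keep into a staging table.
    db2 "CREATE TABLE mytable_keep LIKE mytable"
    db2 "INSERT INTO mytable_keep SELECT * FROM mytable
         WHERE created >= CURRENT DATE - 90 DAYS"

    # 2. Load them back with REPLACE. COPY YES writes a load copy image
    #    that the HADR standby can replay; the path must be reachable
    #    from the standby as well.
    db2 "DECLARE c1 CURSOR FOR SELECT * FROM mytable_keep"
    db2 "LOAD FROM c1 OF CURSOR REPLACE INTO mytable COPY YES TO /shared/loadcopy"

    # 3. Clean up the staging table.
    db2 "DROP TABLE mytable_keep"
    db2 connect reset

    The INSERT into the staging table is still fully logged, but it covers only the small fraction of rows being kept, not the bulk being purged.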

  6. #6
    Join Date
    Jul 2016
    Location
    Moscow
    Posts
    68
    Provided Answers: 7
    Use LOAD ... COPY YES for loading.
    Do you need to delete a subset of data during data purging or all table rows?
    Regards,
    Mark.

  7. #7
    Join Date
    Aug 2016
    Posts
    5
    But Mark, I think LOAD ... COPY YES is incompatible with HADR too.

    Actually, we have to delete rows based on an age criterion (a retention policy, you could say). Earlier, we were creating duplicates of the actual target tables with the NOT LOGGED INITIALLY clause to avoid unwanted logging, and then loading from a cursor with the NONRECOVERABLE option.

    But since HADR was implemented, it blocks both of those clauses. So now we need a reliable, HADR-compatible purging mechanism.

  8. #8
    Join Date
    Jul 2016
    Location
    Moscow
    Posts
    68
    Provided Answers: 7
    LOAD ... COPY YES is compatible with BLOCKNONLOGGED.
    But you need to be careful with such a load: if the standby is not able to find the corresponding load copy file, the tablespace where the table resides will be placed in the "restore pending" state, and you will have to restore the whole database on the standby to repair that tablespace.
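    One way to reduce that risk is to write the load copy to a location the standby can also read. The path and file names below are placeholders:

    Code:
    # Sketch: keep the load copy where the standby can find it.
    # /shared/loadcopy must be mounted under the same path on both
    # the primary and the standby hosts.
    db2 "LOAD FROM purge.del OF DEL REPLACE INTO mytable COPY YES TO /shared/loadcopy"

    # If the paths differ between hosts, the DB2LOADREC registry
    # variable can point the standby at a configuration file that
    # maps load copy locations during replay.
    db2set DB2LOADREC=/home/db2inst1/loadrec.cfg

    Verify on the standby side that replay succeeds before relying on this in production.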

    Can you describe your retention policy in more detail, and why you can't use table partitioning for this?
    Regards,
    Mark.
