  1. #1
    Join Date
    Nov 2007
    Posts
    36

    Unanswered: Delete is taking 3 hrs for a table of 60k records

    Hi,

    When I'm deleting data from 20 tables, one table is taking 3 hours to delete 60k records, while other tables with more data (150k, etc.) are deleted very fast (within seconds). I checked the indexes and removed them, but it's still taking more time for this table only. I'm deleting the dependent table for this table first, and then this table.

    All the delete scripts are incorporated in a shell script. When I delete from the console or from the Embarcadero tool, it deletes fine. But when deleting from the shell script, it takes much more time.

    I can't find an explanation for this. Can someone help me?


    Another question: when loading IXF files, the information below is displayed.

    Number of rows read = 75
    Number of rows skipped = 0
    Number of rows loaded = 75
    Number of rows rejected = 0
    Number of rows deleted = 75
    Number of rows committed = 75

    What is meant by "Number of rows deleted"?

    Thanks,

  2. #2
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    For the deleting, are you doing anything different in the shell script than you are doing manually? How many foreign keys are on the table?

    For the load, you must be using LOAD REPLACE, which will delete all of the current rows?

    Andy

  3. #3
    Join Date
    Nov 2007
    Posts
    36
    There are 4 foreign keys with the option ON DELETE NO ACTION.

    All the delete scripts are the same for the 20 tables; each is just delete from <table name>.

    While deleting, a lot of log files are being created. Looking at this from another angle, would writing the delete script with CS/RR help?


    For the load:
    I want to insert the data into an existing table.
    I'm using LOAD with the INSERT option.
    The output statistics confuse me: do they mean that the same number of records was deleted?


    Thanks,

  4. #4
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    Quote Originally Posted by ani_dbforum
    There are 4 foreign keys with the option ON DELETE NO ACTION.

    All the delete scripts are the same for the 20 tables; each is just delete from <table name>.

    While deleting, a lot of log files are being created. Looking at this from another angle, would writing the delete script with CS/RR help?


    For the load:
    I want to insert the data into an existing table.
    I'm using LOAD with the INSERT option.
    The output statistics confuse me: do they mean that the same number of records was deleted?


    Thanks,
    Deleting from tables is the worst operation for performance because of the logging and the locks required. I have found that when deleting a lot of rows, it is better to break the work up into smaller chunks than to do one massive delete; see the sketch below. This is especially true when foreign keys are involved. It does not matter what isolation level you use on the delete (CS/RR): DB2 will still take an exclusive lock on each row being deleted. If you are deleting every row from the table, there are alternative methods which are much faster, like LOAD REPLACE from /dev/null.
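
    For illustration, a minimal sketch of the chunked approach, assuming a DB2 LUW CLP shell; MYDB and myschema.bigtable are placeholders, not names from this thread:

    #!/bin/sh
    # Delete in batches of 10,000 rows, committing after each batch so the
    # transaction log and lock list stay small (placeholder database/table).
    db2 connect to MYDB
    while true; do
        # DB2 LUW allows deleting from a fullselect, which caps the batch size
        db2 "DELETE FROM (SELECT * FROM myschema.bigtable FETCH FIRST 10000 ROWS ONLY)"
        rc=$?
        db2 "COMMIT"
        # rc 0 = rows were deleted; a warning such as SQL0100W (no rows left)
        # or an error returns a non-zero code and ends the loop
        if [ $rc -ne 0 ]; then
            break
        fi
    done
    db2 connect reset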

    I do not understand your last statement.

    Andy

  5. #5
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5
    I assume your delete statement on these 60K rows has a WHERE clause. Is there an index to support the WHERE clause? Are there indexes on the 4 child tables to support the foreign keys? Do those 4 child tables have children of their own, and are indexes in place to support those foreign keys? Also, are you doing any kind of commit during the process to free up the logs? These are a few of the items you may want to check to reduce the time it takes to delete all those rows.
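
    As a sketch of the foreign-key check (schema and table names are placeholders): list the foreign keys pointing at the parent via SYSCAT.REFERENCES, then compare each child's FK columns against its indexes in SYSCAT.INDEXES.

    -- Foreign keys referencing the parent table
    SELECT constname, tabschema, tabname, fk_colnames, deleterule
    FROM syscat.references
    WHERE reftabschema = 'MYSCHEMA'
      AND reftabname   = 'MYPARENT';

    -- Indexes on one child table, to compare against FK_COLNAMES
    SELECT indname, colnames
    FROM syscat.indexes
    WHERE tabschema = 'MYSCHEMA'
      AND tabname   = 'MYCHILD';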
    Dave

  6. #6
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    Quote Originally Posted by ani_dbforum

    Another question: when loading IXF files, the information below is displayed.

    Number of rows read = 75
    Number of rows skipped = 0
    Number of rows loaded = 75
    Number of rows rejected = 0
    Number of rows deleted = 75
    Number of rows committed = 75

    What is meant by "Number of rows deleted"?
    It means that you had tried to load 75 duplicate records, and they were deleted after the index build phase.
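
    If you want to inspect those duplicates instead of losing them, LOAD can route them to an exception table. A minimal sketch with placeholder names:

    -- The exception table must have the same columns as the target
    CREATE TABLE myschema.mytable_exc LIKE myschema.mytable;

    -- Duplicates removed after the index build phase land in the exception table
    LOAD FROM data.ixf OF IXF
      INSERT INTO myschema.mytable
      FOR EXCEPTION myschema.mytable_exc;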
    ---
    "It does not work" is not a valid problem statement.

  7. #7
    Join Date
    Nov 2007
    Posts
    36
    There are no WHERE clauses in any of the delete scripts, and no commits.

    The child tables with the foreign keys have indexes.


    I tried to delete the data using load from /dev/null and then applied the SET INTEGRITY command:


    load from /dev/null of del replace into clearance.EVENT_EXCEPTIONS nonrecoverable
    SET INTEGRITY FOR CLEARANCE.EVENT_EXCEPTIONS GENERATED COLUMN, CHECK, STAGING, MATERIALIZED QUERY, FOREIGN KEY IMMEDIATE UNCHECKED


    For 3 tables it shows the error below:

    SQL3601W The statement caused one or more tables to automatically be placed
    in the check pending state. SQLSTATE=01586


    I think the check pending state applies to the dependent tables also.
    If so, the dependent tables should also be included, which means the scope grows.
    When I do not use the SET INTEGRITY command for these 3 tables, no check pending state is applied.

    It is confusing when to use the SET INTEGRITY command, and for which tables. I actually ran it immediately after the load from /dev/null command to avoid any check pending states.
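
    For what it's worth, a sketch of how one might see which tables SQL3601W left in check pending state, and validate them rather than bypassing the check (only CLEARANCE.EVENT_EXCEPTIONS is from the post above; everything else is a placeholder):

    -- Tables currently in check (set integrity) pending state
    SELECT tabschema, tabname, status
    FROM syscat.tables
    WHERE status = 'C';

    -- Validate the data; dependent tables may need their own statement
    SET INTEGRITY FOR clearance.event_exceptions IMMEDIATE CHECKED;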

  8. #8
    Join Date
    Jun 2008
    Posts
    1
    SET INTEGRITY FOR <tablename> OFF suspends constraint checking temporarily by putting the table into check pending state; this command is useful when you want to run the LOAD utility. To resume constraint checking after the load is done, use SET INTEGRITY FOR <tablename> IMMEDIATE CHECKED FOR EXCEPTION IN <tablename> USE <exception-table>, which pulls the table out of check pending state.
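
    A sketch of that sequence, with placeholder names (the exception table must match the base table's columns):

    SET INTEGRITY FOR myschema.mytable OFF;

    -- ... run the LOAD here ...

    CREATE TABLE myschema.mytable_exc LIKE myschema.mytable;
    SET INTEGRITY FOR myschema.mytable IMMEDIATE CHECKED
        FOR EXCEPTION IN myschema.mytable USE myschema.mytable_exc;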

  9. #9
    Join Date
    Dec 2010
    Posts
    3
    I have the same problem (I use DB2 9.7).
    Deleting all the records in my table exceeds the transaction log size, so I used load from /dev/null.
    It returns to the db2 prompt, but when I try to make sure the records have been deleted, using select count(*), it shows:
    -----------------------
    SQL0668N Operation not allowed for reason code "1" on table
    "DB2INST1.XXXXX". SQLSTATE=57016
    -----------------------

    So I did the next step:

    ---------------------
    db2 > set integrity for XXXXX immediate checked
    SQL3601W The statement caused one or more tables to automatically be placed
    in the Set Integrity Pending state. SQLSTATE=01586
    ---------------------

    I actually want to use integrity checking, but how do I resolve the pending state?

    Thx

  10. #10
    Join Date
    Jan 2009
    Location
    Zoetermeer, Holland
    Posts
    746
    Quote Originally Posted by ani_dbforum View Post
    There are 4 foreign keys with the option ON DELETE NO ACTION.
    NO ACTION does not mean "do not check". Are there indexes defined properly on both the parent and the child tables? If not, this could happen:
    - delete an order header
    - let me first check if there are order lines
    - no index? Too bad, I have to do a table scan on the order-lines table for each and every order-header row I am going to delete

    See my point? Try to execute your delete SQL through db2batch and let it generate a snapshot for you (see the sketch below): very informative.
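
    For example (a sketch: mydb and delete.sql are placeholders, and the exact options vary by version, so check db2batch -h):

    # delete.sql holds the DELETE statement, terminated with ;
    # -o p 5 requests a detailed performance snapshot
    db2batch -d mydb -f delete.sql -o p 5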

  11. #11
    Join Date
    Oct 2007
    Posts
    246
    As suggested earlier, are you trying to load duplicate records? The third (delete) phase of the load must have been triggered in your case.

    Lim,
    in your case the dependent table data needs a constraint check, hence DB2 is placing those tables in integrity pending state. Find the dependent tables from syscat.tables where status <> 'N' (a sketch follows). Post your comments on this.

    Regards,
    Paul
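
    A sketch of that check (STATUS values: N = normal, C = set integrity pending):

    SELECT tabschema, tabname, status
    FROM syscat.tables
    WHERE status <> 'N';

    -- then, for each dependent table found (placeholder names),
    -- validate the data to clear the pending state:
    SET INTEGRITY FOR myschema.child1, myschema.child2 IMMEDIATE CHECKED;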

  12. #12
    Join Date
    Dec 2010
    Posts
    3

    In reply:

    The definition between parent and child (the foreign key) is set up correctly.
    The deletion order is already proper, to make sure the foreign keys do not cause any problem.
    I do use indexes.
    By the way, I forgot to tell you that I have activated HADR. Is that the cause?

    As for loading the duplicate data, I did it, but using INSERT statements in a file run with db2 -tvf <filename>. I checked the content of the table and the data is already there, so that process is fine.

    The other thing: when I try to see the content of the table which is dependent on that "previous problem table", it shows the same error:
    DB2INST1.XXXXX". SQLSTATE=57016

    So I did the same thing again, but I'm worried because it loosens the dependency, which may create a new problem in the future.

    Can I just set the dependency back without revisiting the past transactions?
