  1. #1
    Join Date
    May 2006
    Posts
    6

    Unanswered: deletes taking too long

    I am trying to delete 2,500,000 rows from a table I have. I select the 2,500,000 rows I want to delete into a temporary session table and then use the following command

    DELETE FROM table WHERE account_id IN (SELECT id FROM session.tempacct)

    This delete operation had been running at the command line for 3 days before I killed it. I know that it should not be taking anywhere near this long. I think it might have something to do with the way the table or some DB2 parameters are set up. Does anyone know what might be causing this problem? Please let me know if there is any additional information I can provide you with.

    Thanks
    Last edited by mpathare; 05-22-06 at 15:11.
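
    For reference, the setup described above looks roughly like this. This is a sketch only: `table`, `account_id`, and the selection predicate are placeholders from the post, and the temp table is assumed to be a DB2 declared global temporary table.

    ```sql
    -- Declared session temp table (NOT LOGGED, rows survive COMMIT)
    DECLARE GLOBAL TEMPORARY TABLE session.tempacct (id INTEGER)
        ON COMMIT PRESERVE ROWS NOT LOGGED;

    -- Hypothetical predicate standing in for the real selection criteria
    INSERT INTO session.tempacct (id)
        SELECT account_id FROM table WHERE account_id < 1000000;

    -- The delete that ran for three days
    DELETE FROM table
        WHERE account_id IN (SELECT id FROM session.tempacct);
    ```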

  2. #2
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    Deletes are very, very expensive to perform. And it gets worse the more rows that are deleted. The reasons are logging, locks, referential checking, etc. The best solution is to break up deleting that many rows into smaller units of work instead of one big one. See if you can delete them in 10,000 row chunks or smaller and COMMITing after each chunk.

    HTH

    Andy
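
    The chunked approach Andy describes can be sketched with the DB2 LUW idiom of deleting from a fullselect; check that your DB2 version supports this, and note that `table`/`account_id` are the placeholders from the original post.

    ```sql
    -- One chunk: delete up to 10,000 matching rows, then COMMIT to
    -- release locks and log space. Repeat (e.g. from a script) until
    -- the statement affects zero rows.
    DELETE FROM (
        SELECT 1 FROM table
        WHERE account_id IN (SELECT id FROM session.tempacct)
        FETCH FIRST 10000 ROWS ONLY
    ) AS chunk;
    COMMIT;
    ```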

  3. #3
    Join Date
    Aug 2001
    Location
    UK
    Posts
    4,650
    The number of indexes on the table also makes a difference

    Sathyaram
    Visit the new-look IDUG website, and register to gain access to the excellent content.

  4. #4
    Join Date
    May 2006
    Posts
    6
    The number of indexes on the table also make a difference
    What do you mean by this? Can you explain how or why the number of indexes makes a difference?

    I have one index, on the table's primary key.

  5. #5
    Join Date
    Jul 2005
    Posts
    47
    Sathyaram,

    I think he should be able to use ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY and then delete the rows. It should save him time. What do you suggest?

    Thanks ,
    Venky
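
    Venky's suggestion would look roughly like the following. This is a hedged sketch: in some DB2 versions the ACTIVATE clause is only valid if the table was created with the NOT LOGGED INITIALLY attribute, and the names are the placeholders from the post.

    ```sql
    -- Activation and the delete must be in the same unit of work;
    -- the deletes in this transaction are then not logged.
    -- Caveat: if the statement fails, the table is marked unusable
    -- and must be restored or recreated, so use with care.
    ALTER TABLE table ACTIVATE NOT LOGGED INITIALLY;
    DELETE FROM table
        WHERE account_id IN (SELECT id FROM session.tempacct);
    COMMIT;
    ```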

  6. #6
    Join Date
    Sep 2003
    Posts
    237
    For each row to be deleted, the IN subquery may scan on average half of the temp table; with 2.5 million rows that is a huge amount of unnecessary reading. I would write a stored procedure that reads the temp table sequentially and deletes the matching rows; you can keep a counter and COMMIT every 5,000 rows or so.
    mota
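
    mota's cursor-plus-counter approach might be sketched in DB2 SQL PL like this. The procedure name is hypothetical, `table` and `account_id` are the placeholders from the post, and the WITH HOLD cursor is assumed so the cursor survives the intermediate COMMITs.

    ```sql
    CREATE PROCEDURE purge_accounts ()
    LANGUAGE SQL
    BEGIN
        DECLARE v_id INTEGER;
        DECLARE v_count INTEGER DEFAULT 0;
        DECLARE at_end INTEGER DEFAULT 0;
        -- WITH HOLD keeps the cursor open across COMMITs
        DECLARE c1 CURSOR WITH HOLD FOR
            SELECT id FROM session.tempacct;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end = 1;

        OPEN c1;
        FETCH c1 INTO v_id;
        WHILE at_end = 0 DO
            DELETE FROM table WHERE account_id = v_id;
            SET v_count = v_count + 1;
            IF MOD(v_count, 5000) = 0 THEN
                COMMIT;   -- release locks and log space every 5,000 deletes
            END IF;
            FETCH c1 INTO v_id;
        END WHILE;
        CLOSE c1;
        COMMIT;
    END
    ```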
