  1. #1
    Join Date
    Nov 2007
    Posts
    27

    Problem with DELETE statement on IBM DB2 UDB 8

    Hello,
    I am facing a problem with a DELETE statement. I have to delete around 2 billion records from a table with 150 columns. On executing the DELETE statement, it fails after some time with the message "log space full".

    I have 2 GB of log space, but that is proving insufficient. I am filtering on an indexed column to delete the data; if I had to filter on any other column, I would have to run the query more than 1,000 times.

    Is there any way to overcome this?

  2. #2
    Join Date
    Jun 2006
    Posts
    471
    In the same unit of work, do an ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY and then the DELETE (sketched below).
    Or delete in smaller batches and commit after each one.
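
    A minimal sketch of the first option, with a hypothetical table and predicate (BIGTABLE, INDEXED_COL); NOT LOGGED INITIALLY only holds for the unit of work that activates it, so run with autocommit off:

        -- both statements in the same unit of work (autocommit off)
        ALTER TABLE bigtable ACTIVATE NOT LOGGED INITIALLY;
        DELETE FROM bigtable WHERE indexed_col < 20070101;
        COMMIT;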
    Best Regards, Guy Przytula
    DB2 UDB LUW certified V6/7/8

  3. #3
    Join Date
    Nov 2007
    Posts
    27
    Thanks.
    But if we use NOT LOGGED INITIALLY and the statement fails for some reason,
    the table is left in an inaccessible state and has to be dropped and recreated. Please correct me if I am wrong, but that makes it a little risky.

    So do you see any other way?

  4. #4
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Write a stored procedure that processes the data through a cursor declared WITH HOLD (so the cursor is not closed at commit), deletes the appropriate rows, and commits every 100 deletes or so. I do this all the time. A sketch follows below.
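
    A minimal sketch of that pattern in DB2 SQL PL. The names (PURGE_BIGTABLE, BIGTABLE, INDEXED_COL) and the predicate are placeholders for illustration, not the actual procedure described above:

        -- illustrates the WITH HOLD cursor pattern; names are hypothetical
        CREATE PROCEDURE purge_bigtable ()
        LANGUAGE SQL
        BEGIN
          DECLARE v_key INTEGER;
          DECLARE v_deleted INTEGER DEFAULT 0;
          DECLARE v_done INTEGER DEFAULT 0;
          -- WITH HOLD keeps the cursor open across commits
          DECLARE c1 CURSOR WITH HOLD FOR
            SELECT indexed_col FROM bigtable
            WHERE indexed_col < 20070101
            FOR UPDATE;
          DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

          OPEN c1;
          del_loop: LOOP
            FETCH c1 INTO v_key;
            IF v_done = 1 THEN
              LEAVE del_loop;
            END IF;
            DELETE FROM bigtable WHERE CURRENT OF c1;
            SET v_deleted = v_deleted + 1;
            IF MOD(v_deleted, 100) = 0 THEN
              COMMIT;   -- releases log space; the cursor survives the commit
            END IF;
          END LOOP;
          CLOSE c1;
          COMMIT;       -- final commit for the last partial batch
        END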
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  5. #5
    Join Date
    Apr 2007
    Location
    Chicago
    Posts
    68
    I agree with Marcus A, except I'd base the commit frequency on time rather than on the number of records. If you need concurrency with application users, commit about every 15-30 seconds; if you don't need concurrency, every 60-120 seconds. If you prefer counting records, you can experiment with the number of deletes per commit to reach the desired frequency.

    Another option is to code the stored proc to delete key ranges that are passed in as parms. That way you could run several instances of the stored proc concurrently, each deleting a mutually exclusive range (see the sketch below).
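
    A sketch of the range-parm idea, reusing the hypothetical names from above; instead of a cursor it uses DB2's DELETE-from-fullselect idiom so each unit of work is capped at a fixed batch size:

        -- each call handles one mutually exclusive key range
        CREATE PROCEDURE purge_range (IN p_low INTEGER, IN p_high INTEGER)
        LANGUAGE SQL
        BEGIN
          DECLARE v_rows INTEGER DEFAULT 1;
          WHILE v_rows > 0 DO
            -- delete at most 10,000 rows per unit of work
            DELETE FROM (SELECT 1 FROM bigtable
                         WHERE indexed_col BETWEEN p_low AND p_high
                         FETCH FIRST 10000 ROWS ONLY);
            GET DIAGNOSTICS v_rows = ROW_COUNT;
            COMMIT;
          END WHILE;
        END

    Two sessions could then run, say, CALL PURGE_RANGE(1, 1000000000) and CALL PURGE_RANGE(1000000001, 2000000000) in parallel.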

  6. #6
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5

    try it in reverse

    Raj,
    You don't say how many rows would be left in the table after deleting your 2 billion; I would guess fewer than 2 billion. How about unloading the data that should remain and then performing a LOAD with REPLACE? That should be a much quicker process, with no logging of the deletes. A sketch follows below.
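
    A minimal sketch of that approach, again with hypothetical names and predicate. Note that on a recoverable database, a LOAD without COPY YES leaves the tablespace in backup pending unless NONRECOVERABLE is specified:

        -- unload the rows to keep, then replace the table contents
        EXPORT TO keep_rows.ixf OF IXF
          SELECT * FROM bigtable WHERE indexed_col >= 20070101;

        LOAD FROM keep_rows.ixf OF IXF
          REPLACE INTO bigtable NONRECOVERABLE;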

  7. #7
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by jsharon1248
    I agree with Marcus A, except I'd base the commit frequency on time rather than on the number of records. If you need concurrency with application users, commit about every 15-30 seconds; if you don't need concurrency, every 60-120 seconds. If you prefer counting records, you can experiment with the number of deletes per commit to reach the desired frequency.
    The stored proc that I wrote actually looks at both elapsed time and the number of rows deleted to determine commit frequency (whichever comes first). I also have a parm that caps the total number of rows deleted before the SP stops. Don't forget the final commit at the end. A fragment of that decision logic is sketched below.
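
    A fragment sketching that whichever-comes-first logic, with hypothetical variable and parm names; it would sit inside the fetch/delete loop of the earlier cursor sketch:

        -- commit on whichever threshold is reached first
        IF v_since_commit >= p_rows_per_commit
           OR TIMESTAMPDIFF(2, CHAR(CURRENT TIMESTAMP - v_last_commit)) >= p_seconds_per_commit
        THEN
          COMMIT;
          SET v_since_commit = 0;
          SET v_last_commit = CURRENT TIMESTAMP;
        END IF;

        -- stop once the caller's cap on total deletes is reached
        IF v_deleted >= p_max_rows THEN
          LEAVE del_loop;
        END IF;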
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  8. #8
    Join Date
    Sep 2006
    Posts
    97
    If the table has 2 billion rows and you are deleting all of them, load from /dev/null with the REPLACE option; that empties the table without logging the deletes.
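
    With the hypothetical table name from above, that would be:

        -- replacing the contents with an empty file empties the table
        LOAD FROM /dev/null OF DEL REPLACE INTO bigtable;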
