  1. #1
    Join Date
    May 2003
    Posts
    6

    Unanswered: Performance issues related to deletion of data from large tables

    Hi,

    I am facing performance problems while deleting data from a big table; the deletes take a very long time. The table is about 100 GB, has one clustered index, and is split across 10 partitions. To keep the table's size under control I archive data on a monthly basis using a script. The script deletes 200,000 rows in one chunk, commits, and then proceeds to the next chunk. I recently recreated the index because of this same performance problem. Previously it took about 5 minutes to delete 200,000 records; right after recreating the index it was faster, but now it is taking 2 - 3 hrs again.

    Please suggest how to solve this.

    Cheers,

    Ranjan...
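
    For reference, a monthly archive script of the kind described above is typically a loop like the following (a minimal sketch in Transact-SQL; the table name `big_table` and the date column `trade_date` are hypothetical, not from the original post):

    ```sql
    -- Sketch of a chunked archive delete: 200,000 rows per batch,
    -- committed after each batch, as described above.
    declare @rows int
    select @rows = 1
    while @rows > 0
    begin
        begin transaction
        set rowcount 200000          -- limit this delete to one chunk
        delete from big_table
        where trade_date < dateadd(month, -1, getdate())
        select @rows = @@rowcount    -- 0 when no qualifying rows remain
        set rowcount 0               -- reset the limit
        commit transaction
    end
    ```

    Committing per chunk keeps the transaction log from filling, at the cost of re-scanning to find the next batch each time.
    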

  2. #2
    Join Date
    Feb 2002
    Location
    Willy is on vacation
    Posts
    1,208
    Hiya Ranjanm,

    How often do you run UPDATE STATISTICS?

    Cheers
    Willy
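
    For reference, statistics can be refreshed on just the index rather than the whole table, which is much cheaper on a table this size (a sketch; the table and index names are hypothetical):

    ```sql
    -- Sketch: refresh optimizer statistics on the clustered index only
    -- (hypothetical names: big_table, big_table_cidx).
    update statistics big_table big_table_cidx
    -- Recompile stored procedures so they pick up the new statistics:
    exec sp_recompile big_table
    ```
    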

  3. #3
    Join Date
    Sep 2002
    Location
    Hong Kong
    Posts
    159
    Does the clustered index contain columns that would be useful in the delete? If not, consider dropping the index and rebuilding it afterwards. You can probably rebuild using the sorted_data option to save time.

    Incidentally why do you have a clustered index on a partitioned table, surely you're getting a significant amount of partition skew?
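
    The drop-and-rebuild with sorted_data would look something like this (a sketch with hypothetical index and column names; sorted_data only skips the sort if the rows really are already in clustered-key order):

    ```sql
    -- Sketch: rebuild the clustered index without re-sorting the data
    -- (hypothetical names; assumes rows are already in key order).
    drop index big_table.big_table_cidx
    create unique clustered index big_table_cidx
        on big_table (trade_date, trade_id)
        with sorted_data
    ```
    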

  4. #4
    Join Date
    May 2003
    Posts
    6
    Hi Willy,

    This table is big, around 200 GB, so I am not able to run UPDATE STATISTICS on it. The table has only one index, and that is a unique, primary clustered index; it has 10 partitions. Recently, when performance went bad, I dropped the entire index and recreated it. After recreating the index the deletes became very fast, i.e. I was able to delete 200,000 rows within 5 minutes, but after deleting around 100,000,000 rows the deletes are again taking a long time.
    What other alternative can I try?
    Please suggest.

    Note: after every 200,000 rows the script commits the transaction.

    Ranjan...

  5. #5
    Join Date
    May 2003
    Posts
    6


    Hi,

    Yes, the clustered index contains columns that are useful in the deletion. As per my previous mail, I dropped the index and re-created it, but with no lasting effect. After re-creating the index the deletes became faster, but then they became slow again.

    Ranjan...

  6. #6
    Join Date
    Sep 2002
    Location
    Hong Kong
    Posts
    159
    Originally posted by Ranjanm
    Hi,

    Yes, the clustered index contains columns that are useful in the deletion. As per my previous mail, I dropped the index and re-created it, but with no lasting effect. After re-creating the index the deletes became faster, but then they became slow again.

    Ranjan...
    Hi, this sounds like page or table fragmentation. Over time tables begin to accumulate unused space. It is possible to have 7 out of every 8 pages unused (i.e. no data stored on them) and the 8th page containing only a single row. This makes accessing the data via any kind of table scan (even if it is being positioned by the index key) very expensive. Since you're deleting a large number of rows, and it's getting progressively slower, I suspect this is what is going on. If each row were accessed via an index read, the time to delete each row would remain approximately the same, if anything becoming quicker as the number of index levels reduces.

    The solutions are:

    1) Drop and recreate the clustered index - as you've seen this speeds things up significantly because it unfragments the table by copying the data to a new set of pages and building the clustered index on the newly allocated pages and deallocating the old pages.

    2) BCP the data out, drop and recreate the table, then BCP the data back in. This will probably take longer than option 1.

    3) BCP the data out, truncate the table, then BCP the data back in. This will probably take about the same amount of time as option 2.
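
    Options 2 and 3 are run from the command line with the bcp utility; a rough sketch of option 3 (the server name, login, database, and file paths here are all placeholders):

    ```shell
    # Sketch of option 3: copy the data out, truncate, copy back in.
    # SERVER, sa/password, mydb, and the dump path are placeholders.
    bcp mydb..big_table out /dumpdir/big_table.bcp -c -S SERVER -U sa -P password
    isql -S SERVER -U sa -P password <<EOF
    truncate table mydb..big_table
    go
    EOF
    bcp mydb..big_table in /dumpdir/big_table.bcp -c -S SERVER -U sa -P password -b 10000
    ```

    The -b batch size on the load keeps each bcp transaction small, similar in spirit to the chunked deletes.
    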

    Some other things might help you along the way: have you configured a large I/O pool for this? Even when deleting rows, the pages still have to be read into cache, and using a 16K I/O pool may speed this up by reading 8 pages per I/O rather than 1. Large I/O pools can be created without having to restart the ASE.

    I doubt enabling parallel queries will help at this point because they are not used for update or delete statements.
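
    Creating the 16K pool is done with sp_poolconfig (a sketch; the 100M pool size below is an arbitrary example, not a recommendation for this server):

    ```sql
    -- Sketch: carve a 16K-I/O buffer pool out of the default data cache,
    -- so scans read 8 pages (on a 2K-page server) per I/O.
    -- The 100M size is an arbitrary example.
    exec sp_poolconfig 'default data cache', '100M', '16K'
    -- Verify the resulting pool layout:
    exec sp_cacheconfig 'default data cache'
    ```
    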

  7. #7
    Join Date
    May 2003
    Posts
    2
    Further to Richard's note, how many rows are actually in the table and how many rows do you remove on a monthly basis? Also, are inserts to the table made in clustered key order or are they random with respect to this key? (I'm assuming that you're deleting a range of records based on the clustered key rather than performing point-deletes).

    Russ
    Last edited by russ_allen; 06-01-03 at 19:35.

  8. #8
    Join Date
    May 2003
    Posts
    6
    Hi,

    Thanks for taking the time to reply to my queries. Yes, I have thought of these options, but since this table is so big, the first and second options take a lot of time. In any case, it now seems that I will have to go through one of these steps.
    Will let you know once I complete it.

    Cheers

    Ranjan....

  9. #9
    Join Date
    May 2003
    Posts
    6
    Hi,

    There are around 400,000,000 rows in the table and its size is around 180 GB. I remove 80,000,000 records in a month. The inserts are in clustered index key order. I delete 200,000 rows at a time and commit after each chunk is executed.

    Ranjan...


    Originally posted by russ_allen
    Further to Richard's note, how many rows are actually in the table and how many rows do you remove on a monthly basis? Also, are inserts to the table made in clustered key order or are they random with respect to this key? (I'm assuming that you're deleting a range of records based on the clustered key rather than performing point-deletes).

    Russ

  10. #10
    Join Date
    Sep 2002
    Location
    Hong Kong
    Posts
    159
    Originally posted by Ranjanm

    Will inform once i complete the same..
    Keep us posted, pleasure to be of assistance.

    Richard
