  1. #1
    Join Date
    Dec 2003
    Location
    China,Fujian
    Posts
    8

    delete data slowly

    Environment: Informix database 7.30.UC2 running on SGI IRIX 6.5.
    Question: when I delete from table A (5000k rows), database performance declines rapidly. The process takes up so much CPU and memory that it affects other running programs.

    In the online.log file I noticed a marked change in the checkpoint messages:

    Checkpoint Completed: duration was 20 seconds.

    The checkpoint time is much longer than before (3 seconds).


    Thanks for any suggestions!

  2. #2
    Join Date
    Nov 2003
    Location
    Mumbai, India
    Posts
    92
    Hi,

    When we do a huge manipulation such as deleting approximately 5000k rows from a table, we have to make sure that the DELETE statement ideally does an index scan, not a sequential scan, particularly on a table with a high number of data and index extents. A sequential scan performs best when the data is stored contiguously; otherwise it wreaks havoc, taking up system resources such as memory and disk and adversely affecting the other processes running concurrently.
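
    You can verify which access path the optimizer picks with SET EXPLAIN, which writes the query plan to a file called sqexplain.out in the current directory. A minimal sketch (the table name a and column id are placeholders, not from the original post):

        -- Record query plans in sqexplain.out
        SET EXPLAIN ON;

        -- The plan reports INDEX PATH or SEQUENTIAL SCAN for this DELETE
        DELETE FROM a WHERE id < 500000;

        SET EXPLAIN OFF;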

    The other point is that the checkpoint duration going up (from 3 to 20 seconds) while the deletion is in progress is to be expected. You can reduce it either by reducing the checkpoint interval (CKPTINTVL in the ONCONFIG file) so that the shared-memory buffers are flushed more often, or by reducing LRU_MAX_DIRTY, which sets the percentage of modified (dirty) pages allowed in the LRU queues before the page cleaners start, so that the flushing begins earlier.
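
    For illustration only, the relevant ONCONFIG entries might look like this (the values are hypothetical examples, not recommendations):

        # ONCONFIG excerpt -- example values only
        CKPTINTVL      120   # seconds between checkpoints; shorter means more frequent, smaller checkpoints
        LRU_MAX_DIRTY  20    # % of an LRU queue that may be dirty before page cleaners start
        LRU_MIN_DIRTY  10    # % dirty at which the page cleaners stop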

    One more point to keep in mind: up-to-date statistics always help any SQL operation. They guide the optimizer to choose the correct path for data and index input/output.
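
    In Informix this is done with the UPDATE STATISTICS statement; for a single table it might be run like so (the table name a is a placeholder):

        -- Rebuild optimizer statistics and data distributions for one table
        UPDATE STATISTICS HIGH FOR TABLE a;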

    Regards,
    Shriyan

  3. #3
    Join Date
    Oct 2003
    Posts
    706


    If you are deleting 5,000,000 rows at a time, also consider the possibility that the DBMS may be laboriously updating the indexes for each deleted row! Some DBMSes will do that; others are smarter.

    What will consistently work, if you can do it, is to drop or turn off the indexes on the table, then do the mass delete, then rebuild the indexes. (When building an index in toto, DBMSes can use sorting and other algorithms that don't apply to individual changes to the index.)
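
    In Informix SQL the sequence might look like this (the index, table, and column names are placeholders):

        -- Drop the index so the delete doesn't maintain it row by row
        DROP INDEX idx_a_id;

        -- Perform the mass delete
        DELETE FROM a WHERE id < 5000000;

        -- Rebuild the index in one sorted pass
        CREATE INDEX idx_a_id ON a (id);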

    Another "trick," again if you can do it, is to perform the delete as a series of transactions, say each one deleting only 500 rows. What you're trying to do here is to stay comfortably within the readily-available memory buffers that the server's likely to have without "straining," and also to keep the transaction rollback requirements small. In other words, you're changing your algorithm to keep the queries "friendlier."

    If you actually plot the performance of a delete-query as the number of rows increases, you'll usually see a "knee shaped" performance-curve: up to a certain size the increase is linear, but then the line bends much more sharply upward, "hitting the wall." This is an informal test of course -- many factors can influence the behavior of a particular query -- but it's worth experimenting to see if this proves to be true in your case. (This kind of "hitting the wall" performance effect is commonly seen in a variety of situations when resources become tight, e.g. "thrashing" in a virtual-memory subsystem.)
    ChimneySweep(R): fast, automatic
    table repair at a click of the
    mouse! http://www.sundialservices.com

  4. #4
    Join Date
    Apr 2003
    Location
    Phoenix, AZ
    Posts
    177

    Re: delete data slowly

    Your extended checkpoint times are a function of the amount of information that has changed and the amount of time necessary to physically write those changes to disk.

    As suggested, you might try lowering LRU_MIN_DIRTY and LRU_MAX_DIRTY to start the cleaning process sooner.

    The checkpoints can also become too frequent if your physical log is too small, since Informix forces a checkpoint when the physical log fills (at about 75% full).
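
    You can check whether the physical log is the trigger with onstat -l, which reports physical- and logical-log usage; the physical log size itself is set by PHYSFILE in the ONCONFIG file (the value below is only an example):

        # Show physical- and logical-log usage
        onstat -l

        # ONCONFIG: physical log size in KB (example value, size to your workload)
        PHYSFILE  20000
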
    Fred Prose
