  1. #1
    Join Date
    Apr 2008
    Posts
    11

    Unanswered: insert is sometimes slow

    Hi,
    I am facing an issue where the same SQL statement inserting a record into a particular table is sometimes very slow: on average an insert takes 1 ms, but occasionally it takes 22 seconds.
    The table has more than 8 million records.
    It has a BLOB column, is non-partitioned, and has 1 index (not on the BLOB column).
    The table is defined with APPEND ON and VOLATILE ON.
    DB2 registry variables:
    DB2_SKIPDELETED=ON
    DB2_SKIPINSERTED=ON

    Can anyone tell me how I can overcome this problem?
    Working on V9 FP2.

  2. #2
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    A lot of things can affect insert performance: tablespace type, referential constraints, indexes, and even the method by which the insert statements are issued.

    If the tablespace is SMS and needs to grow during a particular insert, then that insert will be slower. The number of referential constraints will slow down inserts, especially if other applications are modifying the other tables. If each insert statement uses literals instead of parameter markers, then the performance of all the inserts is degraded, some worse than others.
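
    For example (table and column names here are purely illustrative, not from the original post): building each insert with literal values forces a separate statement compile every time, while parameter markers let the application prepare once and execute many times.
    -- literal values: the statement text changes on every insert, so each one is compiled separately
    INSERT INTO orders (id, payload) VALUES (1001, BLOB(X'CAFE'))
    -- parameter markers: prepare once in the application, then execute repeatedly with new values
    INSERT INTO orders (id, payload) VALUES (?, ?)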

    Triggers can also slow down inserts.

    You are going to have to figure out where the bottleneck is on the slower inserts by monitoring your system as a whole.

    Andy

  3. #3
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    If the same insert usually takes 1 ms, but sometimes takes 22 seconds, then the first thing I would look at is lock wait time.
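
    One way to check this (just a sketch; substitute your own database name for MYDB) is the lock snapshot monitor:
    db2 update monitor switches using lock on
    db2 get snapshot for locks on MYDB
    db2pd -db MYDB -locks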
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  4. #4
    Join Date
    Apr 2008
    Posts
    11
    Hi Marcus,
    Yes, there is lock wait, as there are bulk inserts and updates running against the same table.
    Now, how can I reduce the lock contention?

  5. #5
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    First, you might want to reconsider the APPEND option. If both applications are trying to insert at the end of the table at the same time, there is likely to be lock contention.

    To improve insert performance in general, you should have more aggressive page cleaning.
    db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON

    You should have at least as many page cleaners as CPUs (db config).
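
    For example (MYDB is a placeholder for your database name):
    db2 update db cfg for MYDB using NUM_IOCLEANERS 8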

    You should also make sure your logbufsz is sufficient (db config). If you are doing bulk inserts, then it should probably be about 256 pages instead of default of 8 pages (make sure to increase your dbheap by the same if not set to AUTOMATIC).
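
    A sketch of the corresponding changes (MYDB is a placeholder; the DBHEAP value is only illustrative and only needed if DBHEAP is not AUTOMATIC):
    db2 update db cfg for MYDB using LOGBUFSZ 256
    db2 update db cfg for MYDB using DBHEAP 1200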

    Your logfile sizes (db config) should be set fairly large (at least 10,000 pages instead of default 1000 pages).
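
    For example:
    db2 update db cfg for MYDB using LOGFILSIZ 10000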

    In your bulk inserts (not sure how you are doing this) I would make sure you are committing about every 1,000 rows to improve concurrency. If you are doing this with IMPORT you can use COMMITCOUNT.
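
    For example (file name, schema and table name are placeholders):
    db2 import from newrows.del of del commitcount 1000 insert into myschema.mytable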

    Remember that the speed of the bulk inserts may need to be slowed down very slightly to improve concurrency. In your application, this may be an acceptable compromise.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  6. #6
    Join Date
    Apr 2008
    Posts
    11
    First, you might want to reconsider the APPEND option. If both applications are trying to insert at the end of the table at the same time, there is likely to be lock contention.
    == The table is already in append mode.

    To improve insert performance in general, you should have more aggressive page cleaning.
    db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON
    You should have at least as many page cleaners as CPUs (db config).
    == I have 8 CPUs and the page cleaners are set to 8.

    You should also make sure your logbufsz is sufficient (db config). If you are doing bulk inserts, then it should probably be about 256 pages instead of the default of 8 pages (make sure to increase your dbheap by the same if not set to AUTOMATIC).
    == My logbufsz is 1024.

    Your logfile sizes (db config) should be set fairly large (at least 10,000 pages instead of the default 1000 pages).
    == Log file size is 12500.

    In your bulk inserts (not sure how you are doing this) I would make sure you are committing about every 1,000 rows to improve concurrency. If you are doing this with IMPORT you can use COMMITCOUNT.
    == Not using IMPORT; using plain INSERT statements.

    Remember that the speed of the bulk inserts may need to be slowed down very slightly to improve concurrency. In your application, this may be an acceptable compromise.

    The only thing is that the same setup was working perfectly until two weeks ago. Nothing has changed on the database side, and no new application code has been deployed either, so I don't know why this is happening.

  7. #7
    Join Date
    Sep 2004
    Location
    Belgium
    Posts
    1,126
    Quote Originally Posted by jagdishrawata
    First, you might want to reconsider the APPEND option. If both applications are trying to insert at the end of the table at the same time, there is likely to be lock contention.
    == The table is already in append mode.
    Better switch off append mode then. That way, concurrent inserts have a better chance of going to different pages. (But only if there is enough free space, of course; if not, turning off APPEND will decrease performance.)
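
    For example (schema and table name are placeholders):
    db2 alter table myschema.mytable append off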
    -- Peter Vanroose,
    IBM Certified Database Administrator, DB2 9 for z/OS
    IBM Certified Application Developer
    ABIS Training and Consulting
    http://www.abis.be/
