Results 1 to 8 of 8
  1. #1
    Join Date
    Feb 2015

Unanswered: How many inserts per second is a lot?

We are running DB2 LUW 10.1 on a RHEL 5.11 ESX VM with 256 GB of RAM and 24 CPUs, an 8 Gb SAN fabric, an IBM 8700 frame, and IBM SVCs. We seem to be hitting a limit in how the database is tuned today and are beginning the steps to troubleshoot. I'm the Linux admin and I don't see any indication of CPU or I/O wait. The workload is a mix of everything: purges, inserts, long-running queries that tie up logs and cause sync I/O, billing jobs, etc. We continuously run between 2,000 and 3,000 inserts per second. Does that seem like a lot compared to other people's systems? Would you expect a well-tuned system with similar resources to be able to go higher?

    Thanks in advance

  2. #2
    Join Date
    Apr 2006
    Provided Answers: 11
    If they are all going to the same table, that could create a problem.
    Maybe the organization of the table could be revised? A partitioned table? Inserts always at the end (append mode)?
    Some details would be helpful.
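    If inserts really do all land at the end of one hot table, append mode can be turned on directly. A minimal sketch, assuming a hypothetical table MYSCHEMA.ORDERS:

    ```sql
    -- Hypothetical table name; APPEND ON makes DB2 place new rows at the
    -- end of the table instead of searching free-space control records.
    ALTER TABLE MYSCHEMA.ORDERS APPEND ON;

    -- To turn it back off later; space freed while in append mode is
    -- only reused after a REORG.
    ALTER TABLE MYSCHEMA.ORDERS APPEND OFF;
    ```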
    Best Regards, Guy Przytula
    Database Software Consultant

  3. #3
    Join Date
    Dec 2007
    Richmond, VA
    Provided Answers: 5
    There's a lot of "it depends" here: order of inserts vs. clustering order of the table, free space, how many indexes, simultaneous access by multiple users, etc. What type of insert? One row by one user, then the next? A batch process inserting row by row, or inserting an array?
    In a batch process, depending on the table, I have been able to get between 10K and 14K inserts per second on a consistent basis, and have hit higher rates (a 19K peak, I think), but not consistently. Google it; there are some scripts out there that you can grab and test on your machine.
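    As a rough illustration of the row-by-row vs. array distinction: DB2 accepts a multi-row VALUES list, so several rows travel in one statement. A sketch, where T1 and its columns are made-up names:

    ```sql
    -- Three rows in one statement: one trip to the engine instead of
    -- three separate INSERT round trips.
    INSERT INTO T1 (ID, PAYLOAD)
    VALUES (1, 'first'),
           (2, 'second'),
           (3, 'third');
    ```

    The same idea scales further with parameter-array binding in the CLI/JDBC batch interfaces.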

  4. #4
    Join Date
    Feb 2015
    More info from our DBAs.

    We have hundreds of tables, it's not partitioned, not in append mode. 90 million inserts a day, 75% of which happen from 11am to 5pm.

    It's a hybrid OLTP / reporting database.

    We just don't have another database to compare it to, so before we spend tons of time shaving a millisecond here and there, we would like to know if we are even in the ballpark of what can be expected. If we are, we need to redirect the app team to spread customers across multiple databases, getting a better return on our time spent increasing capacity.

  5. #5
    Join Date
    Apr 2012
    Provided Answers: 27
    You have not defined the meaning of "hitting a limit of how the database is tuned", so it is unclear what exactly the problem is that you want to solve.

    In my experience, much depends on the I/O throughput you can achieve in the VM, and on what resources are assigned to the logical machine running the DB2 server. I've seen lousy I/O throughput on some virtualised environments (particularly in the cloud) when you have no idea what unrelated applications are sharing the same I/O infrastructure.

    For reporting, 90 million rows per day is not large, assuming a relatively low average row length and limited LOBs/CLOBs/BLOBs. If the app is using plain logged inserts (as distinct from LOAD), then log-archival costs will be significant.

    If speeding up logged inserts is the concern, the DBA can adjust various tuning knobs; this is well documented. Factors such as the number and type of indexes, compression, logging configuration, and archival configuration come into play, along with many others, in addition to system factors such as the actual throughput available to the VM running the DB2 server.

  6. #6
    Join Date
    Feb 2015

    It depends

    There are a number of factors that have to be considered.

    1) Tablespace layout
    - Extent size
    - Container storage: number of disks, RAID level, stripe size

    2) Clustering / Range partitioning / indexing
    - If there is a unique index or clustering index, it has to be the first index created on the table (index creation order is important)

    3) Table design
    - For fast inserts the table should be in append mode; this avoids the lookup on the free-space list during the insert
    - chars vs. varchars: varchars can lead to overflow rows, which slow things down

    4) Logging
    - The active log directory should be on fast disk using RAID 1+0
    - The log buffer size has to be tuned correctly
    - The log file size should be tuned to avoid a high number of log archives per hour (this also depends on your recovery window)

    That all said, 2,000 to 3,000 inserts/sec on the hardware you describe is not a lot. I have worked on systems doing 10,000+ per second.
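    The logging points above map onto database configuration parameters. The values here are purely illustrative placeholders (MYDB is a stand-in database name), not recommendations; they are issued from the DB2 command line processor:

    ```sql
    -- LOGBUFSZ and LOGFILSIZ are both counted in 4 KB pages.
    UPDATE DB CFG FOR MYDB USING LOGBUFSZ 2048      -- 8 MB log buffer
    UPDATE DB CFG FOR MYDB USING LOGFILSIZ 65536    -- 256 MB per log file
    -- Move active logs to dedicated fast storage (takes effect on restart):
    UPDATE DB CFG FOR MYDB USING NEWLOGPATH /fastlogs/db2
    ```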


  7. #7
    Join Date
    Feb 2015
    I understand, and apologize for the tough question. There isn't anything I can do from a storage point of view at this point: we have more than enough server resources, direct I/O, a 99% DB2 cache hit ratio with 6M 32K pages, 8-10 ms read response time, and sub-millisecond write response. We actually never hit disk, due to tons of cache on the IBM SVCs. From historical SAN graphs, we've pushed much more I/O from this server in the past. I've implemented every recommended VMware, IBM, and DB2 best-practice tuning variable I've been able to find from a server point of view. Not having another database this large in our environment to compare against is one of our problems. From what you're saying, 90M inserts a day isn't a lot, and that helps; it tells us we have work to do on the existing database.

    We need to improve insert times; I'll send over what you mentioned to the DBAs.

    The database seems to go into a mode where it holds logs, grows past our SOFTMAX of 2 GB to 5+ GB, and then sync I/O starts, response times increase, and it's a snowball effect of slowness.

    Thanks for your input, looks like we have more troubleshooting to do on the existing DB.

  8. #8
    Join Date
    Jun 2003
    Toronto, Canada
    Provided Answers: 1
    Your system doesn't seem to be anywhere near capacity. Tell your DBAs they can try the MONREPORT.DBSUMMARY() system stored procedure as a first step to identify the performance bottlenecks and eliminate them one by one. Also keep in mind that in a virtualized system hosting a database you should not overcommit RAM or CPU; I've seen bad things happen to DB2 on ESX, especially with hyperthreading enabled.
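    From the CLP, that first step looks like this; the argument is the monitoring interval in seconds over which the summary is collected:

    ```sql
    -- Collect a 30-second summary of throughput, wait times, and
    -- bufferpool activity for the currently connected database.
    CALL MONREPORT.DBSUMMARY(30)
    ```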
    "It does not work" is not a valid problem statement.
