Hi guys - hope you can help with this.

I'm writing a Windows C++ app that needs to perform up to several thousand inserts per second into a Sybase ASE 12.5 database running on Solaris.

So, in simple terms, my code does something like this:

ct_command ("begin transaction")
ct_command (big string with multiple insert statements)
ct_command ("commit transaction")

If, for example, I run my app with parameters of:

updates/second: 1000
max batch size: 50

then it will send 20 batches of 50 (50 insert statements within the "big string" mentioned above), spaced out equally across the second.
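
The pacing itself is trivial; stripped down, the send loop is basically this (send_batch is a placeholder for the wrapper call that builds and sends the big string):

#include <windows.h>

void send_batch(int n_inserts);    /* builds the big string and sends it */

void run(int updates_per_sec, int max_batch_size, int seconds)
{
    /* e.g. 1000 updates/sec at batch size 50 -> 20 batches, one every 50 ms */
    int   batches_per_sec = updates_per_sec / max_batch_size;
    DWORD interval_ms     = 1000 / batches_per_sec;

    for (int s = 0; s < seconds; ++s)
        for (int b = 0; b < batches_per_sec; ++b)
        {
            send_batch(max_batch_size);
            Sleep(interval_ms);    /* crude pacing; ignores the send time itself */
        }
}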

This all works fine and I can get up to about 2000 inserts/second (response time for one batch of 50 is about 25ms). Happy days.

If, however, I up the max batch size to 100 (so the "big string" contains 100 insert statements), something funny happens. I use a library that wraps the Sybase Open Client API (CT-Lib), so it handles all errors/exceptions etc. It reports no errors, but when I do a select count(*) on the table afterwards I find rows missing. In the example above, running for 10 seconds at a batch size of 50 gives me the expected 10000 rows; at a batch size of 100, 10 seconds gives me only around 7000. It's as if the effective max batch size is around 70, yet I get no errors and the transaction commits OK.
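
From reading the CT-Lib docs, one check I think I could do from my side: each INSERT in the big string should come back as its own CS_CMD_SUCCEED result, so counting the results per batch and comparing against the batch size should show whether statements are getting dropped. Something like this (untested, same assumptions as the sketch above):

#include <cstdio>
#include <ctpublic.h>

/* Diagnostic: send one batch and count per-statement outcomes.
   Expect one CS_CMD_SUCCEED per INSERT in the batch. */
void check_batch(CS_COMMAND *cmd, const char *batch_sql, int expected)
{
    CS_INT result_type;
    int succeeded = 0, failed = 0;

    ct_command(cmd, CS_LANG_CMD, (CS_VOID *)batch_sql, CS_NULLTERM, CS_UNUSED);
    ct_send(cmd);

    while (ct_results(cmd, &result_type) == CS_SUCCEED)
    {
        if (result_type == CS_CMD_SUCCEED) ++succeeded;
        if (result_type == CS_CMD_FAIL)    ++failed;
    }
    fprintf(stderr, "batch: %d succeeded, %d failed, expected %d\n",
            succeeded, failed, expected);
}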

Since I go through this wrapper library (which is well established), I'm not that familiar with the Open Client API itself. I tried switching on ct_debug with CS_DBG_ERROR and saw nothing, then with CS_DBG_ALL and saw far too much.
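
A middle ground I'm considering (again straight from the docs, so untested): registering my own client- and server-message callbacks so I can see anything the wrapper might be swallowing:

#include <cstdio>
#include <ctpublic.h>

/* Print every message the server sends back. */
CS_RETCODE CS_PUBLIC servermsg_cb(CS_CONTEXT *ctx, CS_CONNECTION *conn,
                                  CS_SERVERMSG *msg)
{
    fprintf(stderr, "Server msg %ld, severity %ld, line %ld: %s\n",
            (long)msg->msgnumber, (long)msg->severity,
            (long)msg->line, msg->text);
    return CS_SUCCEED;
}

/* Print every client-side (CT-Lib) message. */
CS_RETCODE CS_PUBLIC clientmsg_cb(CS_CONTEXT *ctx, CS_CONNECTION *conn,
                                  CS_CLIENTMSG *msg)
{
    fprintf(stderr, "Client msg %ld, severity %ld: %s\n",
            (long)msg->msgnumber, (long)msg->severity, msg->msgstring);
    return CS_SUCCEED;
}

void install_diag_callbacks(CS_CONTEXT *ctx)
{
    /* conn == NULL installs them context-wide */
    ct_callback(ctx, NULL, CS_SET, CS_SERVERMSG_CB, (CS_VOID *)servermsg_cb);
    ct_callback(ctx, NULL, CS_SET, CS_CLIENTMSG_CB, (CS_VOID *)clientmsg_cb);
}

I don't know whether the wrapper would let these coexist with its own handlers, though.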

Beyond that I am not too sure what to do next. I have spoken to our DBAs and they insist it is an issue with my app (and they are probably right), but is there any way to prove this? For example, is there a way to log all the SQL that is sent to the server in Open Client?

Thanks - hope this makes sense.

H.