We have some jobs written as Unix/Perl scripts running against a Sybase server.
Lately, we have been having issues with:
- Long-running stored procedures that are not picking up the right
plan. A stored procedure that originally ran fine now takes a very
long time to complete due to a wrong execution plan.
- Sometimes we get the following error in the log when running a
stored procedure/script, even though the Sybase server connection is still alive:
    Open Client Message: Message number: SEVERITY = (5) NUMBER = (6)
    Message String: ct_results(): network packet layer: internal net library error: Net-Library operation terminated due to disconnect
    DBD::Sybase::st execute failed: OpenClient message: LAYER = (5) ORIGIN = (3) SEVERITY = (5) NUMBER = (6)
    Message String: ct_results(): network packet layer: internal net library error: Net-Library operation terminated due to disconnect
    DBD::Sybase::st execute failed: OpenClient message: LAYER = (5) ORIGIN = (3) SEVERITY = (5) NUMBER = (6)
    Message String: ct_results(): network packet layer: internal net library error: Net-Library operation terminated due to disconnect
The system is pretty much stable in terms of code changes; however,
the Sybase performance is poor.
Make sure update statistics is run regularly on the db.
Monitor the execution plans of stored procedures with
set showplan on and verify index usage.
Make sure SARGs are valid and datatypes are compatible in the where clause.
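The checks above can be sketched as a quick isql session; the table and procedure names here are placeholders, not from the original post:

```sql
-- Refresh optimizer statistics on a table the slow proc reads
-- ('orders' is a placeholder table name)
update statistics orders
go

-- Flag procs that touch the table so their plans are rebuilt on next exec
sp_recompile orders
go

-- Show the query plan for subsequent statements to verify index usage
set showplan on
go
exec my_slow_proc   -- placeholder proc name
go
set showplan off
go
```

In the showplan output, look for "Table Scan" lines where you expected "Using Clustered Index" or an index name; a table scan on a large table usually means the SARG or statistics problem described above.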
When batch jobs are running, there is a job run by the Sybase DBAs
to run update stats on all the tables... Since there is quite a
chance of the same table being used by both processes, does it
impact performance overall?
Another possibility would be: how much data is inserted into the tables accessed by the SP in question? It is also possible that the nature of the data can skew the statistics for a day or two, and everything returns to normal after that - been through it.
How often do you run update statistics/sp_recompile? Also, can you replicate the problem on any other server (Dev/QA), to make sure it is only the SP and not the data?
There are too many nuances to cover all of them. Assuming you have like datatypes in your SARGs, stats are updated twice daily, and indexes cover the queries, so no table scans or index scans are consuming resources ..
either exec the sproc with recompile,
or create the sproc using the 'with recompile' option.
This will force a current query tree at execution time.
There is an expense to this if the exec is within a loop ..
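A minimal sketch of the two options just mentioned; proc and table names are placeholders:

```sql
-- Option 1: force a fresh query plan for this one execution only
exec my_slow_proc with recompile
go

-- Option 2: mark the proc itself, so EVERY execution builds a fresh plan
-- (this is the expensive variant if the proc is executed in a loop)
create procedure my_slow_proc
    @cust_id int              -- placeholder parameter
with recompile
as
    select order_id, order_date
    from orders               -- placeholder table
    where cust_id = @cust_id
go
```

Option 1 is the right tool for testing a suspected stale plan; option 2 trades plan reuse for plan freshness on every call, which is exactly the production concern raised later in this thread.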
1 If you are using ASE 15, ensure you run update index statistics. Twice a day is a bit too often, once a day after the large load process is enough.
2 Can you post:
- normal and abnormal times
- on what basis you have identified that it is running the "wrong" query plan (post both right and wrong QPs)
- sp_who and sp_lock output when the proc is running slow
3 The disconnect looks like the connection was actually inactive (and has been terminated by a decent tcp_keepalive setting), so do not change that; instead, find out why the proc is inactive.
4 BTW, exec with recompile is certainly valid when testing and chasing down this kind of problem; it ensures that a new query plan is built and used.
exec with recompile is completely incorrect when placed in code which is used in production (imagine every user executing the proc, multiple times, building a query plan each time ... may as well run static or dynamic SQL).
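A sketch of the ASE 15 statistics command from point 1 and the diagnostic snapshot requested in point 2; the table name is a placeholder:

```sql
-- ASE 15: gather statistics on all columns of all indexes,
-- not just the leading index columns (as plain 'update statistics' does)
update index statistics orders
go

-- While the proc is running slow, capture who is active and what is locked
sp_who
go
sp_lock
go
```

In the sp_who output, check the slow session's status (e.g. blocked or awaiting a lock) and cross-reference its spid against sp_lock to see which table locks it holds or waits on.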
I think, before the year is out, the spammers will be broke and back to their crack pipes, and we will still be here. What do you think ? IIRC, both you and I are not going to give up our $1,000+ per day jobs to spam for a possible $200 per day.