dBforums > Database Server Software > DB2 > Explain Recompiling Stored Procedure Behavior?
#1 Registered User (Join Date: Feb 2012, Posts: 2)
Explain Recompiling Stored Procedure Behavior?

Hi,
I'm seeing a behavior in DB2 where we create a new schema and, after it is created, call COMPILE_SCHEMA. When we start running a transaction load on the system it is initially very slow (approx. 30 transactions per second). I can see from the snapshot logs that most of the time is spent in one stored procedure, which calls several others that delete data from some tables. If we recompile this stored procedure, the transaction rate jumps to 500 transactions per second.

Can anyone explain why this happens? Does recompiling the procedure after some transactions have gone through the system allow it to select a different execution plan? How do I get this behavior right after we create the new schema?

Thanks
#2 Registered User (Join Date: May 2003, Location: USA, Posts: 5,731)
DB2 chooses the best access path based on statistics captured by the RUNSTATS utility (which you should run at least weekly). For static SQL, such as that contained in a stored procedure, you need to rebind all of the SP's packages for it to take advantage of the new statistics and pick the right access path. See the REBIND command in the Command Reference manual.
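As a sketch of those two steps from the DB2 command line (the database, table, and package names here are hypothetical; the package name for an SQL procedure is generated, so you would need to look it up in the catalog first):

```
db2 CONNECT TO MYDB
db2 "RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL"
db2 "REBIND PACKAGE MYSCHEMA.P1234567"
```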
__________________
M. A. Feldman
IBM Certified DBA on DB2 for Linux, UNIX, and Windows
IBM Certified DBA on DB2 for z/OS and OS/390
#3 Registered User (Join Date: Feb 2012, Posts: 2)
Thanks Marcus,
Since we are loading hundreds of thousands of rows at a time, would you recommend initially running a smaller set, then RUNSTATS and REBIND, before throttling up to the full load?
#4 Super Moderator (Join Date: Aug 2001, Location: UK, Posts: 4,649)
It will be a problem only in the initial few loads. After that, your statistics will generate a decent access path, and it will not change because of a few hundred thousand more rows.

But your idea is good too.

You can use the ADMIN_CMD stored procedure to issue RUNSTATS and REBIND.
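For example (the schema, table, and procedure names below are hypothetical), on DB2 for LUW you can run RUNSTATS through ADMIN_CMD and then rebind the procedure's package with REBIND_ROUTINE_PACKAGE:

```sql
-- Refresh statistics on the (hypothetical) table the procedure deletes from.
CALL SYSPROC.ADMIN_CMD(
  'RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL');

-- Rebind the package behind the (hypothetical) procedure so its static
-- SQL is re-optimized against the fresh statistics. 'P' = procedure.
CALL SYSPROC.REBIND_ROUTINE_PACKAGE('P', 'MYSCHEMA.PURGE_ORDERS', 'ANY');
```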


HTH
#5 Registered User (Join Date: May 2003, Location: USA, Posts: 5,731)
If you have a table that often has few or zero rows and then quickly grows in size, and this happens repeatedly for the same table, you can alter the table to VOLATILE; DB2 will then use default stats that encourage index usage.
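For illustration (the table name is hypothetical), the volatile setting is a simple ALTER:

```sql
-- Tell the optimizer to prefer index access regardless of current
-- statistics, because this (hypothetical) table's size fluctuates.
ALTER TABLE MYSCHEMA.WORK_QUEUE VOLATILE CARDINALITY;

-- To return to statistics-driven access path selection later:
ALTER TABLE MYSCHEMA.WORK_QUEUE NOT VOLATILE CARDINALITY;
```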