I have a 660-table database of about 20 GB. About 8 tables have more than 1,000,000 rows and are bigger than 1 GB each. The next 30-something tables have between 100,000 and a million rows, with sizes between 100 MB and 1 GB. A maintenance stored procedure executes two commands for every table. They are:
update index statistics table_name
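For context, the procedure is essentially a loop of this shape (a sketch with hypothetical proc and cursor names, not the actual code):

```sql
-- Sketch: walk every user table and refresh its index statistics.
-- Proc name, cursor name, and variable name are made up.
create procedure upd_all_index_stats
as
begin
    declare @tname varchar(255)

    declare tab_cur cursor for
        select name from sysobjects where type = 'U'

    open tab_cur
    fetch tab_cur into @tname
    while (@@sqlstatus = 0)   -- 0 = fetch succeeded in ASE
    begin
        exec ("update index statistics " + @tname)
        fetch tab_cur into @tname
    end
    close tab_cur
    deallocate cursor tab_cur
end
go
```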
For the whole database it runs for about 20 hours. It runs on an IBM H50 333 MHz two-processor machine with AIX 4.3.3. ASE version is 126.96.36.199. The machine until recently had 1.5 GB of memory, with 1.2 GB assigned to ASE; it runs nothing but ASE. I tried to speed this up with the following:
- Added 1.5 GB of memory. Created two 512 MB ramdisks and moved tempdb onto this 1 GB memory portion. Removed the tempdb segments from master. No improvement compared to a disk-based 600 MB tempdb, not by a picosecond. I was surprised.
- Created a 16 KB memory pool in the default data cache and assigned half of the cache to it. Procedure cache is at 25%.
- Created - and later dropped - a dedicated 256 MB named cache for tempdb. It gave no boost at all, neither for the disk-based nor for the RAM-based tempdb.
- Increased the number of sort buffers to 15,000 from 500.
- Ran the first command "with consumers 2". Did not see any worker process kick in, although I configured for 10.
I gained only about a 10% speed increase. Am I at the limit of this machine? I doubt it.
When I look at sp_sysmon during update index statistics, I see just one engine doing the work. What is the trick to kick the other engine into participation? I am thinking of putting tempdb onto smaller but more devices, say ten of 100 MB each. The manual says that the number of devices kicks parallelism in (?!)
ASE eats the first 350 tables - all small - in TWO seconds. 95% of the time is spent on the 6 big tables, 75% on the two biggest. I run the procedure every Sunday.
Any good advice?
P.S. A full backup to tape takes one and a half hours, so I do not think the SCSI I/O is saturated.
Originally posted by János Löbb
When I look at sp_sysmon during update index statistics, I see just one engine doing the work. What is the trick to kick the other engine into participation?
Since you are doing update statistics on all the tables in the database from a single stored procedure, it will run on a single CPU. Can you break the stored procedure into, let's say, two (since you mentioned you have two processors): one updates the 3 big tables and the second the remaining ones. Run them concurrently and this way it could use both processors.
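Roughly, the split could look like this (a sketch; proc names and table names are placeholders for your biggest tables):

```sql
-- Proc 1: the two or three biggest tables (75-95% of the runtime)
create procedure upd_stats_big
as
begin
    update index statistics Table1
    update index statistics Table2
    update index statistics Table3
end
go

-- Proc 2: everything else
create procedure upd_stats_rest
as
begin
    update index statistics Table4
    -- ... remaining tables ...
end
go
```

Then start the two procs at the same time from two separate isql sessions; each session is its own task, so the scheduler can place them on different engines.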
CPU Yields by Engine        per sec      per xact     count       % of total
-------------------------   ----------   ----------   ---------   ----------
  Engine 0                         0.1          1.7          38        0.3 %
  Engine 1                        23.7        619.5       14249       99.7 %
-------------------------   ----------   ----------   ---------
Total CPU Yields                  23.8        621.2       14287
So, only Engine 1 is working, although there is plenty to do for both. Table1 has 1,135,000 rows and takes 382 MB, Table2 has 1,406,000 rows and takes 338 MB. One would think that tempdb is loaded with all the temp files, but that is not the case: of the 1024 MB tempdb, only 290 KB is used. Interestingly, Engine 0 does all the network traffic. All the disk reads are APF (asynchronous prefetch) reads.
There are zeros for all the counts in the Worker Process Management and Parallel Query Management sections. Is there a way to force the other engine to do some work without partitioning the tables?
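The parallel-query settings can be checked with sp_configure (these are standard ASE parameter names); with the default max parallel degree of 1, no worker processes are used even when "with consumers" is specified:

```sql
-- Settings that gate worker-process use in ASE
sp_configure "number of worker processes"   -- pool of worker threads (configured for 10 here)
go
sp_configure "max parallel degree"          -- max workers per query; the default of 1 disables parallelism
go
sp_configure "max scan parallel degree"     -- max workers for a hash-based table scan
go
```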
Please check the max online engines configured in your ASE by executing sp_configure "max online engines".
If the value is one, then change it to two by executing:
sp_configure "max online engines", 2
Then restart ASE and see if both engines are being used.