On DB2 9.1.8 I am testing a query that does a full scan of 21M rows (part of an ETL process).
I run the query with db2batch and watch the table snapshot in db2top.
I see that Delta Rows Read/s is ~60,000, with the same Delta Rows Written/s to the temp table. But once the temp table is filled, the fetch from the temp table runs at only ~4,000 rows/s.
The temp space is SMS with a 16K page size and EXTENTSIZE 2.
There are no other concurrent reads of temp tables.
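For reference, a sketch of how such a run can be driven; the database name, query file, and output file below are placeholders, not the real ones:

```shell
# Run the test query in batch mode and report timing information
db2batch -d MYDB -f query.sql -i complete -r results.out

# In another terminal, watch snapshot deltas interactively;
# the Tables screen shows Delta Rows Read / Rows Written
db2top -d MYDB
```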
Check whether the temp tablespace has file system caching enabled. I was told it can be beneficial for SMS temp tablespaces.
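A sketch of what checking and enabling it could look like; the tablespace name TEMPSPACE1 and database name MYDB are assumptions, and the tablespace snapshot should report a "File system caching" line per tablespace:

```shell
# Check the current caching setting of each tablespace
db2 "GET SNAPSHOT FOR TABLESPACES ON MYDB" | grep -i -e "tablespace name" -e "caching"

# Enable OS file system caching for the temp tablespace
db2 "ALTER TABLESPACE TEMPSPACE1 FILE SYSTEM CACHING"
```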
Thank you, but this did not help.
Also, the test query has an ORDER BY clause, and when I remove that clause the query does not use a temp table.
But the fetch speed from the real table is then the same ~4,000 rows/s as from the temp table.
mitrofun, are you just testing the performance of a table scan? Check the snapshots for asynchronous data page reads vs. physical data page reads, and for time waited for prefetch. You can try changing PREFETCHSIZE, NUM_IOSERVERS...
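A sketch of what that check and those tuning knobs could look like; the names (MYDB, TEMPSPACE1) and the values 64 and 8 are purely illustrative:

```shell
# If asynchronous reads are far below physical reads, prefetching is lagging behind the scan
db2 "GET SNAPSHOT FOR BUFFERPOOLS ON MYDB" | grep -i -e "asynchronous pool data page reads" -e "data physical reads" -e "prefetch"

# Try a larger prefetch size on the temp tablespace and more prefetchers
db2 "ALTER TABLESPACE TEMPSPACE1 PREFETCHSIZE 64"
db2 "UPDATE DB CFG FOR MYDB USING NUM_IOSERVERS 8"
```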
Interesting stuff. I think you could bring the article Size of temporary bufferpool in DB2 | db2ude into the equation.
If, based on that article, you trim your temp-space buffer pools down, you could activate file system caching to compensate for the effect.
From this article:
"As a best practice, you should keep the size of your temporary buffer pool small if you have DB2 Stored Procedures creating and dropping temporary tables and these stored procedures are used very heavily. In this case, you are best left with a small buffer pool for that temporary table space."
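If you want to try the approach from the article, it amounts to something like this; the bufferpool name, size, and tablespace name are made up for illustration:

```shell
# Small dedicated 16K bufferpool for the SMS system temp tablespace
db2 "CREATE BUFFERPOOL BP_TEMP16K SIZE 1000 PAGESIZE 16K"
db2 "ALTER TABLESPACE TEMPSPACE1 BUFFERPOOL BP_TEMP16K"
```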
Does creating/dropping temporary tables (does he mean global temporary tables, or regular tables that only exist for a short time?) use the system temporary tablespace?