I don't know exactly how much a lac is, but it must be a lot. I suspect there is some waiting on the log buffer and the active logs. Tuning those parameters (significantly increasing their sizes) would probably help.
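As a hedged sketch of what "tuning those parameters" might look like: in DB2 the log buffer and log file sizes are controlled by the LOGBUFSZ, LOGFILSIZ, LOGPRIMARY, and LOGSECOND database configuration parameters. The database name and values below are illustrative, not recommendations:

```sql
-- Illustrative only: database name "mydb" and values are assumptions
UPDATE DATABASE CONFIGURATION FOR mydb USING LOGBUFSZ 2048
UPDATE DATABASE CONFIGURATION FOR mydb USING LOGFILSIZ 10000 LOGPRIMARY 10 LOGSECOND 20
```

A reconnect (or database deactivation/activation) is typically needed before the new values take effect.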
If there were some way to break the single SQL DELETE into multiple statements that each delete a smaller chunk, with an intermediate commit after each statement, that might help some. You could also run the chunks in parallel.
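One way to sketch such chunked deletes in DB2 is to delete from a fullselect limited with FETCH FIRST, committing after each round until no rows remain. The table name and predicate here are assumptions for illustration:

```sql
-- Hedged sketch: repeat until SQLCODE +100 (no rows found)
DELETE FROM (SELECT 1 FROM mytable
             WHERE status = 'OLD'
             FETCH FIRST 10000 ROWS ONLY);
COMMIT;
```

Each iteration only ever holds about 10,000 rows' worth of log space and locks, instead of the whole delete.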
Another way to get intermediate commits, is to open a cursor on the rows to be deleted using the WITH HOLD option, and then delete where current of cursor. Then you can do a commit every 1000 rows without losing position on the cursor. Not absolutely sure if this would speed things up, but would definitely help on recovery if the delete terminated.
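The cursor technique above could look roughly like this in embedded SQL (table name and predicate are assumptions; the loop would live in the host application):

```sql
-- Hedged sketch of a WITH HOLD positioned delete
DECLARE c1 CURSOR WITH HOLD FOR
    SELECT 1 FROM mytable WHERE status = 'OLD' FOR UPDATE;
OPEN c1;
-- Application loop:
--   FETCH c1;
--   DELETE FROM mytable WHERE CURRENT OF c1;
--   every 1000 deletes: COMMIT;   -- WITH HOLD keeps the cursor open across commits
CLOSE c1;
```

Because the cursor is declared WITH HOLD, each COMMIT releases the accumulated locks and log space without losing cursor position.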
Another technique (if the remaining rows are fewer than the ones deleted) is to extract the remaining rows onto a file and load replace them back into the table. This would effectively perform a reorg at the same time.
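An extract-and-reload sketch using DB2 EXPORT and LOAD; the file name, table name, and predicate are illustrative assumptions:

```sql
-- Hedged sketch: keep the surviving rows, then LOAD REPLACE them back
EXPORT TO keep.ixf OF IXF SELECT * FROM mytable WHERE keep_flag = 'Y';
LOAD FROM keep.ixf OF IXF REPLACE INTO mytable;
```

LOAD ... REPLACE rebuilds the table from the file, which is why it effectively reorganizes the data at the same time.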
Gulshan, not many outside the Indian subcontinent use "lakh" or "lac". To make it universally understandable, you should state numbers in thousands and millions, not in lakhs and crores. Hope you don't mind this comment.
The sqluload API of DB2 did the magic. The only question is that before loading the table using this API, it asks you to quiesce the tablespace where the table resides. I used the sqluvqdp API to quiesce the tablespace.
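For reference, the CLP equivalent of that sqluvqdp quiesce call would be something like the following (schema and table name are assumptions):

```sql
-- Hedged sketch: quiesce before the load, reset afterwards
QUIESCE TABLESPACES FOR TABLE myschema.mytable EXCLUSIVE;
-- ... run the LOAD here ...
QUIESCE TABLESPACES FOR TABLE myschema.mytable RESET;
```

EXCLUSIVE gives the quiescing connection sole access to the tablespace; RESET returns it to normal use.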
But can someone clarify why quiescing is required before loading the table?
Quiescing is the action of forcing all users off the system/database when you need to perform administrative tasks.
Let's say I am performing a load that needs access to tables that are frequently used by other users. Quiescing gives me exclusive access to that tablespace/table until I complete my actions, so the load goes through smoothly and I am assured that no intermediate actions are performed on the table.
In case you have to recover and roll forward the tablespace, your LOAD ... REPLACE ... NONRECOVERABLE will mark the table as inaccessible, and you lose all data you inserted after that point in time.
With IMPORT ... REPLACE, your table will be intact even after the rollforward.
Try the following on a small test database. Your database should have LOGRETAIN ON (so that rollforward recovery is possible):
1) Create a table say tab1
2) Populate it with a few rows
3) backup your database
4) IMPORT ... REPLACE a null file into the table
5) Insert a row into the table
6) Restore the backup image and roll forward
7) Connect and access the table... You will get the row you inserted after the IMPORT
To test LOAD, do the same, but this time step 4 is a LOAD ... REPLACE ... NONRECOVERABLE. In step 7, your table will be inaccessible.
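The steps above could be sketched in CLP roughly as follows. The database name db1, table tab1, and the /dev/null trick for an empty input file are assumptions for illustration:

```sql
-- Hedged sketch of the test (run against a disposable database)
CREATE TABLE tab1 (c1 INT);
INSERT INTO tab1 VALUES (1);
-- enable log retention and take a baseline backup, e.g.:
--   UPDATE DB CFG FOR db1 USING LOGRETAIN ON
--   BACKUP DATABASE db1
IMPORT FROM /dev/null OF DEL REPLACE INTO tab1;   -- empties the table, fully logged
INSERT INTO tab1 VALUES (2);
-- RESTORE DATABASE db1
-- ROLLFORWARD DATABASE db1 TO END OF LOGS AND STOP
SELECT * FROM tab1;   -- the row inserted after the IMPORT survives
-- Repeating with LOAD ... REPLACE ... NONRECOVERABLE in place of the IMPORT
-- should leave tab1 inaccessible after the rollforward.
```

The key difference is that IMPORT is a fully logged SQL operation, while a NONRECOVERABLE LOAD writes nothing the rollforward can replay.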
Originally posted by dollar489
What would be the benefit of a dummy IMPORT over a dummy LOAD? Aren't they the same?