I am working with DB2 v7.2 on AIX.
In our application we have:
DELETE from table_a where id='aa';
Table_a has 10,000,000 rows, and the estimated cost for this SQL statement
is 15,553,342 timerons. How can I write this SQL more efficiently?
Originally posted by RKrick
Do you have an index on id?
What about the cardinality of this column?
Did you "runstats" the tablespace?
1- Yes, I have an index on id.
2- The cardinality of this column is about 1,000,000.
3- Yes, I ran runstats.
Actually, I am looking for an alternative to the DELETE with a lower timeron cost. For example, importing an empty file over those rows (where id='aa').
Is that possible?
No, you cannot "import" only those pages (overwrite them). If you want to use import/export, do a "Search this forum" with the keywords "log full". You'll find some recommendations for implementing it.
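As a sketch of what the export/reload approach usually looks like (table and column names are taken from this thread; verify the exact syntax against your v7.2 documentation and test on a copy first):

```sql
-- Sketch only: export the rows you want to KEEP, then reload the table.
EXPORT TO keep_rows.ixf OF IXF
  SELECT * FROM table_a WHERE id <> 'aa';

-- LOAD ... REPLACE truncates table_a and reloads only the kept rows,
-- avoiding the per-row logging of a mass DELETE (the usual cause of
-- the "log full" condition discussed in those threads).
LOAD FROM keep_rows.ixf OF IXF
  REPLACE INTO table_a;
```

Note that after a LOAD the tablespace may be left in backup-pending state depending on your logging configuration, so plan a backup into the same maintenance window.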
If you still want to try it with SQL, I would do the following:
- reorg the tablespace using the id index as the clustering index (it may impact other packages!)
- lock the table before issuing the delete:
LOCK TABLE <tab_name> IN EXCLUSIVE MODE
so you save a lot of work in the lock manager.
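The steps above could be sketched roughly as follows (index, schema, and table names are placeholders, not from the thread; REORG options may differ slightly on v7.2):

```sql
-- Recluster the data on the id index so rows with the same id
-- sit on contiguous pages, then refresh the optimizer statistics.
REORG TABLE myschema.table_a INDEX myschema.id_index;
RUNSTATS ON TABLE myschema.table_a WITH DISTRIBUTION AND INDEXES ALL;

-- Take one exclusive table lock up front instead of escalating
-- through millions of row locks during the delete.
-- LOCK TABLE only holds until commit, so keep it in the same
-- unit of work as the DELETE.
LOCK TABLE myschema.table_a IN EXCLUSIVE MODE;
DELETE FROM myschema.table_a WHERE id = 'aa';
COMMIT;
```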
I would also check whether there are other bottlenecks (too many indexes, triggers, RI without supporting indexes, etc.).