I am trying to delete 2,500,000 rows from a table. I select the IDs of the rows I want to delete into a temporary session table and then run the following command:
DELETE FROM table WHERE account_id IN (SELECT id FROM session.tempacct)
This delete had been running at the command line for 3 days before I killed it. I know it should not take anywhere near this long, and I suspect it has something to do with how the table or some DB2 parameters are set up. Does anyone know what might be causing this? Please let me know if there is any additional information I can provide.
Deletes are very expensive to perform, and the cost grows with the number of rows deleted. The reasons are logging, locking, referential-integrity checking, and so on. The best approach is to break a delete of that many rows into smaller units of work instead of one big one: delete in chunks of 10,000 rows or fewer and COMMIT after each chunk.
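A minimal sketch of the chunked-delete idea, using Python with an in-memory SQLite database as a stand-in for DB2 (table names `accounts` and `tempacct` are made up for illustration; in DB2 you would cap each chunk with e.g. `FETCH FIRST n ROWS ONLY` instead of `LIMIT`):

```python
import sqlite3

# Set up a toy version of the problem: 25,000 account rows,
# 20,000 of which are marked for deletion in a temp table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER);
    CREATE TABLE tempacct (id INTEGER);
""")
conn.executemany("INSERT INTO accounts VALUES (?)", [(i,) for i in range(25000)])
conn.executemany("INSERT INTO tempacct VALUES (?)", [(i,) for i in range(20000)])

CHUNK = 5000
while True:
    # Delete at most CHUNK matching rows per unit of work.
    cur = conn.execute(
        """DELETE FROM accounts WHERE rowid IN (
               SELECT a.rowid FROM accounts a
               JOIN tempacct t ON t.id = a.account_id
               LIMIT ?)""",
        (CHUNK,),
    )
    conn.commit()  # release locks and log space after each chunk
    if cur.rowcount < CHUNK:
        break  # last (partial or empty) chunk done

remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(remaining)  # 5000 rows whose ids were not in tempacct
```

The COMMIT after each chunk is what keeps the transaction log and lock list small; without it the chunks would still accumulate into one giant unit of work.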
With no index on the temp table, each candidate row checked against the IN list scans, on average, half of the temp table; with 2.5 million records that is a huge amount of unnecessary reading. I would write a stored procedure that reads the temp table sequentially with a cursor and deletes the matching rows one at a time, keeping a counter and COMMITting every 5,000 records or so.
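The stored-procedure logic above can be sketched in Python against an in-memory SQLite database (again a stand-in for DB2 SQL PL, with hypothetical table names; in DB2 you would use a WITH HOLD cursor so it survives the intermediate COMMITs):

```python
import sqlite3

# Toy data: 12,000 account rows, 10,000 ids queued for deletion.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER);
    CREATE TABLE tempacct (id INTEGER);
""")
conn.executemany("INSERT INTO accounts VALUES (?)", [(i,) for i in range(12000)])
conn.executemany("INSERT INTO tempacct VALUES (?)", [(i,) for i in range(10000)])

# Read the temp table once, sequentially, instead of probing it per delete.
ids = conn.execute("SELECT id FROM tempacct").fetchall()
count = 0
for (acct_id,) in ids:
    conn.execute("DELETE FROM accounts WHERE account_id = ?", (acct_id,))
    count += 1
    if count % 5000 == 0:
        conn.commit()  # small units of work, as in the stored procedure
conn.commit()  # commit the final partial batch

remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(remaining)  # 2000
```

This replaces 2.5 million scans of the temp table with a single sequential pass over it, which is where the original query was losing its time.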