Normally I would not base it on the number of rows, but on how long it takes: commit just enough rows that the time between commits stays below the lock wait timeout (the LOCKTIMEOUT database configuration parameter). This assumes you have enough log space.
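Something along these lines in SQL PL, as a rough sketch only: the table name BIG_TABLE, the PURGE_FLAG predicate, the 500-row batch, and the 30-second budget are all placeholder assumptions, and the budget should sit comfortably below your LOCKTIMEOUT.

    -- Time-budgeted delete loop: small fixed batches, but the clock
    -- (not a row count) decides when to COMMIT.
    CREATE OR REPLACE PROCEDURE PURGE_BY_TIME ()
    LANGUAGE SQL
    BEGIN
        DECLARE v_rows  INTEGER DEFAULT 1;
        DECLARE v_start TIMESTAMP;
        SET v_start = CURRENT TIMESTAMP;
        WHILE v_rows > 0 DO
            DELETE FROM (SELECT 1 FROM BIG_TABLE
                         WHERE PURGE_FLAG = 'Y'
                         FETCH FIRST 500 ROWS ONLY);
            GET DIAGNOSTICS v_rows = ROW_COUNT;
            -- Commit once this batch has held locks for ~30 seconds.
            IF TIMESTAMPDIFF(2, CHAR(CURRENT TIMESTAMP - v_start)) >= 30 THEN
                COMMIT;
                SET v_start = CURRENT TIMESTAMP;
            END IF;
        END WHILE;
        COMMIT;  -- pick up whatever the last batches left uncommitted
    END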
But since you are deleting 50% of the rows, and are on V9.7, I would consider copying the rows you want to keep, then TRUNCATE the table, then copy the rows back.
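For example (a sketch only: BIG_TABLE, the KEEP_FLAG predicate, and the staging table name are assumptions, and TRUNCATE, new in 9.7, must be the first statement in its unit of work):

    -- 1. Copy the rows to keep into a staging table.
    CREATE TABLE BIG_TABLE_KEEP LIKE BIG_TABLE;
    INSERT INTO BIG_TABLE_KEEP
        SELECT * FROM BIG_TABLE WHERE KEEP_FLAG = 'Y';
    COMMIT;

    -- 2. Empty the original table. TRUNCATE deallocates pages instead
    --    of logging every row, so it is fast and easy on the log.
    TRUNCATE TABLE BIG_TABLE IMMEDIATE;
    COMMIT;

    -- 3. Copy the kept rows back and clean up.
    INSERT INTO BIG_TABLE SELECT * FROM BIG_TABLE_KEEP;
    COMMIT;
    DROP TABLE BIG_TABLE_KEEP;

The inserts are still fully logged, so for a very large table you might use the LOAD utility instead of the plain INSERTs.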
When using a stored procedure to do the commits, you loop and issue a COMMIT every x rows (e.g. every 1000 rows) using a loop counter.
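A minimal sketch, the same shape as the time-based loop above but committing on a row count; again, BIG_TABLE, the PURGE_FLAG predicate, and the 1000-row batch are assumptions you would tune:

    -- Batched delete: remove at most 1000 qualifying rows per
    -- iteration, COMMIT, and repeat until nothing is left.
    CREATE OR REPLACE PROCEDURE PURGE_OLD_ROWS ()
    LANGUAGE SQL
    BEGIN
        DECLARE v_rows INTEGER DEFAULT 1;
        WHILE v_rows > 0 DO
            DELETE FROM (SELECT 1 FROM BIG_TABLE
                         WHERE PURGE_FLAG = 'Y'
                         FETCH FIRST 1000 ROWS ONLY);
            GET DIAGNOSTICS v_rows = ROW_COUNT;
            COMMIT;  -- release locks and let log space be reused
        END WHILE;
    END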
The reason you do this is to limit how long locks are held and how much log space is needed. How much time each delete consumes depends on the data, triggers, FK constraints, etc., and the same goes for the log space needed. As they say, your mileage will vary. You should test it in a test/development system to see what the optimal number is.
Why do you set the log buffer size to 3 times the log file size?
When DB2 tries to flush a log buffer that is larger than a log file, it has to switch log files in the middle of the flush, which may degrade performance.
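To see how the two settings compare on your database, you can query the SYSIBMADM.DBCFG administrative view (both values are in 4 KB pages):

    -- Compare the log buffer size to the log file size.
    SELECT NAME, VALUE
    FROM   SYSIBMADM.DBCFG
    WHERE  NAME IN ('logbufsz', 'logfilsiz');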
For performance, I think the bigger the commit count the better, provided you have enough log space. And you can lock the table in exclusive mode before you delete the data, to avoid taking row locks one by one, as sketched below.
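For instance (BIG_TABLE and the predicate are placeholders; note that this blocks all other access to the table until you commit):

    -- Take one table-level X lock up front instead of thousands of
    -- row locks; the COMMIT releases it.
    LOCK TABLE BIG_TABLE IN EXCLUSIVE MODE;
    DELETE FROM BIG_TABLE WHERE PURGE_FLAG = 'Y';
    COMMIT;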