RHEL 6.2 - DB2 9.5 FP8.

Hi, I have a HADR database that is supposed to be 24/7, but we do have a maintenance window from 12:00 AM to 5:00 AM on Sundays. This is why I have been running offline reorg/reindex/runstats. We have only LOGARCHMETH1 set up for log archiving.
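For context, the weekly maintenance is essentially the following (database, schema, and table names below are placeholders, not our real ones):

```
# Weekly offline maintenance, run during the Sunday window.
# MYDB and MYSCHEMA.BIGTABLE are placeholder names.
db2 connect to MYDB
db2 "reorg table MYSCHEMA.BIGTABLE"
db2 "reorg indexes all for table MYSCHEMA.BIGTABLE"
db2 "runstats on table MYSCHEMA.BIGTABLE with distribution and detailed indexes all"
db2 connect reset
```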
Last night I got a call that clients could not access the database. When I checked, I saw that the log had filled up while it was reindexing a large table. For some reason it kept retrying and filled the whole drive (more than 25 GB of log files in the LOGARCHMETH1 target directory). The database itself is only 5 GB.

My LOGFILSIZ was 1024 (4 KB pages), with LOGPRIMARY = 30 and LOGSECOND = 100. I know that LOGPRIMARY + LOGSECOND cannot go beyond 256 log files. I increased the log file size to 10240 and the problem was fixed, but I have questions.
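For reference, the fix I applied was the following (MYDB is a placeholder database name; 10240 pages of 4 KB each means each log file grows from 4 MB to 40 MB):

```
# Increase the log file size from 1024 to 10240 4 KB pages.
db2 update db cfg for MYDB using LOGFILSIZ 10240
# As far as I know, LOGFILSIZ only takes effect once all applications
# disconnect and the database is reactivated, and on HADR the same
# value should be set on the standby as well.
```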

1- What are the best practices for log file sizing and handling for a HADR database that will eventually grow to at least 500 GB?
2- How would increasing the log file size affect recovery if we have to restore the database at some point (this is a HADR, TSA, and ACR setup)?
3- I do the following on SQL Server databases, but I don't know whether the equivalent would work on a DB2 HADR database:

- Take an online backup before running reorg/reindex
- Remove archive logging
- Back up the database
- Run reorg/reindex/runstats
- Put archive logging back
- Back up the database again
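If I understand DB2 correctly, that sequence would look roughly like the sketch below. But note my concern: setting LOGARCHMETH1 to OFF switches the database to circular logging, which puts it in backup-pending state, and as far as I know circular logging is not compatible with HADR at all, so the standby would have to be re-established afterwards. All names and paths are placeholders and this is untested:

```
# UNTESTED sketch of the SQL-Server-style approach on DB2.
# Likely NOT viable here: circular logging cannot be used with HADR.
db2 backup db MYDB online to /backup                 # 1. online backup
db2 update db cfg for MYDB using LOGARCHMETH1 OFF    # 2. circular logging (backup pending!)
db2 backup db MYDB to /backup                        # 3. offline backup to clear backup pending
# 4. run reorg/reindex/runstats here
db2 update db cfg for MYDB using LOGARCHMETH1 "DISK:/archlogs/"  # 5. re-enable archiving
db2 backup db MYDB to /backup                        # 6. backup to clear backup pending again
```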

Or alternatively:

- Increase the log size just before reorg/reindex and disconnect all users
- Run reorg/reindex, then decrease the log size again
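The alternative approach might look like this (again untested; my assumption is that the new LOGFILSIZ is only picked up after the database is deactivated and reactivated, and that the standby's configuration should be changed to match):

```
# UNTESTED sketch: temporarily enlarge the log files for maintenance.
db2 "force applications all"                      # disconnect all users
db2 update db cfg for MYDB using LOGFILSIZ 10240
db2 deactivate db MYDB
db2 activate db MYDB                              # pick up the new LOGFILSIZ
# ... run reorg/reindex/runstats here ...
db2 update db cfg for MYDB using LOGFILSIZ 1024   # shrink back afterwards
db2 deactivate db MYDB
db2 activate db MYDB
```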

Client transactions are not very big and the log should never need to be more than 2 GB at most, but maintenance needs a much bigger log.

Please let me know your thoughts. Thank you.