I'm having an issue with one of the databases I support, and any guidance would be most helpful. The issue actually has to do with generating redo log information, so even if you don't have experience with Data Guard, any insight would be welcome.

We have a primary database, with a physical standby database located at another physical site. Data Guard ships the archived logs from the primary database to the standby database.
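For reference, the redo transport is just the standard archive destination setup on the primary, something along these lines (the service name here is made up, not our real TNS alias):

  ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby_db' SCOPE=BOTH;
  ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;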

During normal operations, I sized the logs at about 2 MB each, with five redo log groups. That had the database checkpointing every 10-15 minutes, which worked well.
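In case it's useful, this is roughly how I'm checking the log sizes and the switch rate (just a sketch; I'm eyeballing log switches per hour as a proxy for checkpoint frequency):

  SELECT group#, bytes, members, status
  FROM   v$log;

  SELECT TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS log_switches
  FROM   v$log_history
  GROUP  BY TRUNC(first_time, 'HH24')
  ORDER  BY 1;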

The problem is that there is a user-triggered operation in the database that generates massive amounts of redo. (This is an "import" of external data, and it generates about one million update statements.) What I found was that the primary database got bogged down and was pausing for redo archiving. I increased the size of the log files and the number of groups, and I think I got that problem licked.
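What I did was roughly the following (group numbers, sizes, and paths are examples, not my real layout): add new, larger groups, switch out of the old 2M groups, and drop those once they go INACTIVE.

  ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/PROD/redo06.log') SIZE 100M;
  ALTER DATABASE ADD LOGFILE GROUP 7 ('/u01/oradata/PROD/redo07.log') SIZE 100M;
  ALTER SYSTEM SWITCH LOGFILE;
  -- only after the old group shows INACTIVE in v$log
  ALTER DATABASE DROP LOGFILE GROUP 1;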

The other problem is that I have about a gigabyte of redo logs that need to be sent across the network from the primary database to the standby database. This is causing massive network slowdowns at both locations, and it won't be tolerated for long.
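That gigabyte figure is a rough estimate; I'm gauging the archived redo volume with something like this:

  SELECT TRUNC(completion_time) AS day,
         ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
  FROM   v$archived_log
  GROUP  BY TRUNC(completion_time)
  ORDER  BY 1;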

For now, we've paused the import of these files, but eventually we need to catch up. NOLOGGING doesn't help me, because inserts/updates generate redo no matter what (I didn't know that! I learned something new.) I can probably rig the import to do a direct path SQL*Loader load, which would bypass the redo logs (that's my understanding), but that would invalidate the standby database, correct? Do you have any other ideas how I can avoid these massive redos while maintaining the integrity of the standby database?
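If I did go the direct path route, my plan would be something like the following (untested here, and the control file name is made up), then checking on the primary which datafiles picked up unrecoverable changes, since I assume those are the ones I'd have to refresh on the standby:

  -- sqlldr userid=scott control=import.ctl direct=true

  SELECT file#, unrecoverable_change#, unrecoverable_time
  FROM   v$datafile
  WHERE  unrecoverable_change# > 0;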

Note: I have to use a physical standby database; I can't use a logical standby, because our primary database uses an XML datatype.