The size of your log files depends on the volume of SQL updates, inserts, and deletes performed against your database. This includes the load utility if you used logging during the load. SQL statements that only perform selects are not logged.
In addition to the amount of SQL activity that is logged, you need to consider the commit interval. If a transaction (or the load utility) commits at a reasonable frequency (every 1,000 rows or fewer), then only a moderate number of moderately sized log files is necessary. If you go a long time without committing, then more and larger log files are needed. Once a transaction is committed, the entries in the log for that unit of work are no longer required in the active log, and that portion of the log can be reused.
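As a rough illustration of why commit frequency matters for log sizing, the active log space held by uncommitted work grows with the number of rows changed since the last commit. All numbers below are hypothetical; real log record sizes depend on the table and the operation:

```python
# Rough, hypothetical illustration: active log space pinned by uncommitted work.
# The bytes-per-record figure and workload sizes are made up for the example.

BYTES_PER_LOG_RECORD = 200  # assumed average log record size

def uncommitted_log_bytes(rows_since_last_commit, bytes_per_record=BYTES_PER_LOG_RECORD):
    """Log space that cannot be reused until the next commit."""
    return rows_since_last_commit * bytes_per_record

# Committing every 1,000 rows keeps the non-reusable window small...
frequent = uncommitted_log_bytes(1_000)        # 200,000 bytes
# ...while a million-row unit of work pins far more active log space.
infrequent = uncommitted_log_bytes(1_000_000)  # 200,000,000 bytes
```

The point is only proportionality: a thousand-fold longer commit interval pins a thousand-fold more active log space that cannot be reused.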
There is a bit more to this depending on whether you use circular logging or log retain (archive logging). See the manuals for more information.
Why do you think that smaller log files use more CPU?
Well, depending on the value of softmax, it also means that the database has to commit changes to disk more frequently if the log files are smaller. As I understand it, this is how it works: when the log buffer fills up, it is written to the log file, and when the log file fills up to softmax, the changes are committed to disk. Besides, of course, there is the overhead of opening, closing, and reinitializing files. So it makes perfect sense that smaller log files will use more CPU.
Not exactly. Data is written from the log buffer to the active log on disk when a commit takes place, when a checkpoint takes place, or when the log buffer fills up. Just because data from the log buffer is written to disk (in the active log) does not mean it is committed.
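Here is a minimal sketch of that distinction. The class is illustrative only, not DB2 internals: records leave the log buffer for disk either on commit or when the buffer fills, but being flushed to the active log does not by itself make them committed:

```python
# Illustrative sketch, not DB2 internals: flushing the log buffer to the
# active log on disk is a separate event from committing a transaction.

class LogBuffer:
    def __init__(self, capacity):
        self.capacity = capacity  # max records held in memory
        self.buffer = []          # in-memory log records
        self.on_disk = []         # records written to the active log
        self.committed = set()    # transaction ids that have committed

    def write(self, txid, record):
        self.buffer.append((txid, record))
        if len(self.buffer) >= self.capacity:
            self.flush()          # buffer full: write to disk, no commit

    def commit(self, txid):
        self.flush()              # commit forces the buffer to disk...
        self.committed.add(txid)  # ...and only then marks the tx committed

    def flush(self):
        self.on_disk.extend(self.buffer)
        self.buffer.clear()

log = LogBuffer(capacity=2)
log.write("T1", "update row 1")
log.write("T1", "update row 2")   # buffer full -> flushed to the active log
# T1's records are now on disk, yet T1 is still uncommitted and could roll back.
```

The flush triggered by the full buffer puts T1's records in the active log while `log.committed` remains empty, which is exactly the point of the correction above.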
Lowering the value of softmax will cause the database manager to trigger the page cleaners more often, and to take more frequent soft checkpoints. Page cleaners write changed (dirty) data pages from the database buffer pools to disk (for the tables and indexes that have been updated), but these updates may not necessarily have been committed, and could be backed out with a rollback of the transaction that performed the update.
Likewise, data may have been committed (and written from the log buffer to the active log on disk) and still remain in the database buffer pool, without the table and index updates having reached disk. The writing of table and index data is therefore asynchronous with the commit, sometimes occurring before it and sometimes after. In a worst-case scenario (from a performance viewpoint), the commit and the write of the table and index data to disk (other than in the active log) take place in a synchronous or quasi-synchronous time frame.
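The page-cleaner side of this can be sketched the same way. Again illustrative only, not DB2 internals: cleaners write dirty pages to disk regardless of whether the transactions that dirtied them have committed, and it is the log records that make a later rollback possible:

```python
# Illustrative only: page cleaners flush dirty data pages independently of
# whether the transactions that dirtied them have committed.

class BufferPool:
    def __init__(self):
        self.dirty_pages = {}  # page id -> txid that last changed it
        self.on_disk = set()   # pages whose latest version has been written

    def update_page(self, page_id, txid):
        self.dirty_pages[page_id] = txid

    def page_cleaner_pass(self):
        # Writes dirty pages to disk with no regard for commit status.
        self.on_disk.update(self.dirty_pages)
        self.dirty_pages.clear()

pool = BufferPool()
pool.update_page("P1", "T1")
pool.page_cleaner_pass()
# P1 is on disk even though T1 may not have committed; the log records are
# what allow a rollback to undo the change.
```

A lower softmax in this picture simply means `page_cleaner_pass` runs more often, shrinking the set of dirty pages outstanding at any moment.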
This discussion is relevant to the size of the log buffer, but I am not sure how useful it is with regard to the active log size on disk, except that you don't EVER want the active log to fill up because there is not enough room to hold the logging between commit intervals.
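For sizing that active log, a back-of-envelope check may help. To my knowledge, DB2's total active log space is bounded by the LOGFILSIZ (in 4 KB pages), LOGPRIMARY, and LOGSECOND database configuration parameters; the parameter values below are example settings, not recommendations:

```python
# Back-of-envelope active log sizing. LOGFILSIZ is expressed in 4 KB pages;
# the example values are arbitrary, not tuning advice.

PAGE_SIZE = 4096  # bytes per log page

def active_log_bytes(logfilsiz_pages, logprimary, logsecond):
    """Upper bound on active log space: all primary + secondary files full."""
    return (logprimary + logsecond) * logfilsiz_pages * PAGE_SIZE

# e.g. LOGFILSIZ=1000, LOGPRIMARY=3, LOGSECOND=2 -> 5 files of ~4 MB each
total = active_log_bytes(1000, 3, 2)  # 20,480,000 bytes (~19.5 MB)
```

Sizing this bound so that it comfortably exceeds the logging generated between commits is exactly what keeps the active log from ever filling up.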