  1. #1
    9i slow on Solaris, but fast on Windows

    HELP! Any ideas would be appreciated.

    We are having the strangest problem. A customer has a 46K-line INSERT script with a COMMIT following each INSERT. He installed Oracle 9i on his P4 Windows desktop and was able to run it in about 5.5 minutes. He ran it on a SunFire 3800 with 6 CPUs and 24 GB of RAM, and it takes 13 minutes.
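
    For scale, the script is nothing but repetitions of this pattern (table and column names hypothetical):

    Code:
    INSERT INTO some_table (id, payload) VALUES (1, 'row 1');
    COMMIT;
    INSERT INTO some_table (id, payload) VALUES (2, 'row 2');
    COMMIT;
    -- ... and so on, one COMMIT per INSERT, for all 46K lines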

    He's not a DBA, so what could be causing such poor performance on the vaunted Solaris enterprise hosting platform? The recommended tunings have been applied to the Solaris kernel. Here are the relevant portions of init.ora:

    compatible = 9.2.0
    control_files = (/icrdev/TEST/oradata/control01.ctl, /icrdw/TEST/oradata/control02.ctl)
    db_block_size = 8192
    db_cache_size = 512M
    db_keep_cache_size = 16M
    dml_locks = 5000
    db_domain = WORLD
    db_name = TEST
    fast_start_mttr_target = 300
    global_names = FALSE
    hash_area_size = 26214400
    instance_name = TEST
    java_pool_size = 50M
    large_pool_size = 5M
    open_cursors = 500
    processes = 300
    shared_pool_size = 1024M
    shared_pool_reserved_size= 500M
    log_buffer = 52428800
    pga_aggregate_target = 1024M
    workarea_size_policy = AUTO
    #undo tbs
    undo_management = AUTO
    undo_retention = 10800
    undo_tablespace = UNDOTBS
    user_dump_dest = /u01/app/oracle/admin/TEST/cdump
    background_dump_dest = /u01/app/oracle/admin/TEST/bdump
    core_dump_dest = /u01/app/oracle/admin/TEST/cdump

  2. #2
    1) Is he actually executing the script on the Solaris box, or is he executing it in SQL*Plus on his PC (i.e. each transaction has to go over the network)?
    2) Are both boxes under the same load? Or is the Solaris box used by other apps/users, unlike his own PC?
    3) The CPU in a PC is pretty darn fast; the big advantage of the Solaris box shows when you have tens or hundreds of users hitting it simultaneously.
    4) The log buffer looks too big. Having too large a log buffer affects commit response time, and if you're committing after every insert you will be hit by this (see the query after this list).
    5) Other things which can affect performance include stuff like MTS, schema setup, and disk configuration (they aren't using RAID 5 for the redo logs, are they?) ...
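
    As a rough check on point 4, a query like this (a sketch; run from a DBA account while the load is going) shows cumulative commit-related waits since instance startup:

    Code:
    -- 'log file sync' is what a session waits on at COMMIT;
    -- time_waited is in centiseconds in 9i.
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event IN ('log file sync', 'log file parallel write');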

    Alan

  3. #3

    Alan,

    Thanks for the reply....

    The customer was executing the INSERT script locally on his PC. We loaded the same script onto the Solaris system and ran it from SQL*Plus locally.

    The Solaris box is close to idle. It looks like the SQL*Plus script is using less than 10% of a single CPU, and there is only one DB connection open. Is there a way to spread out the load on the Solaris host? It seems like it would do much better if it were using more than one CPU.
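
    To see what that single session is actually waiting on while the script runs, something like this might help (a sketch; the username is hypothetical):

    Code:
    -- Find the loading session's SID, then list its waits so far.
    SELECT sid FROM v$session WHERE username = 'LOAD_USER';

    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = &sid
     ORDER BY time_waited DESC;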

    Log buffer suggestion noted. How small can we make it? 256K, 1MB?

    I think we worked through the OS configuration and disk controller questions pretty hard.

    Has anyone ever had a problem with file descriptors (open files per process) being insufficient on Solaris? They are set at 256/1024 (soft/hard) right now.

    Thanks again...

    James

  4. #4
    It's a bit difficult to say without seeing what kind of apps/loads the system is designed for, but try a log_buffer of 256K or 512K. Also try increasing fast_start_mttr_target to 1000 to reduce checkpointing (see the snippet below).
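
    In init.ora terms that would look something like this (values are starting points, not gospel):

    Code:
    log_buffer = 524288               # 512K, down from the current 50M
    fast_start_mttr_target = 1000     # less aggressive checkpointing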

    However, unless this script is the primary use for the database, I wouldn't worry too much, as the database is designed for heavy multiuser loads or huge queries, not silly little single-user insert scripts which commit after every insert. :)

    Alan

  5. #5
    Forgot to mention that this sort of script isn't really going to parallelize easily unless you split it into n different scripts and execute them in parallel. Alternatively, if the insert statements are actually doing large selects against multiple large tables, you might be able to configure your database to execute each SQL statement across multiple parallel threads (see the data warehousing manual for more info on setting up parallel execution); a sketch follows.
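
    For the INSERT ... SELECT case, a minimal sketch (table names and degree of parallelism are hypothetical):

    Code:
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ PARALLEL(t, 4) */ INTO target_table t
    SELECT /*+ PARALLEL(s, 4) */ *
      FROM source_table s;

    COMMIT;  -- parallel DML must be committed before the table is re-read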

    Alan

  6. #6
    Can you use the /*+ APPEND */ hint in your insert statements to do a direct-path insert?
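
    For example (table names hypothetical; note that in 9i the hint only takes effect for INSERT ... SELECT, not for single-row VALUES inserts):

    Code:
    -- Direct-path insert: loads blocks above the high-water mark,
    -- bypassing the buffer cache for the inserted data.
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table;

    COMMIT;  -- required before the session can query the table again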

    Also, I recommend committing every 10k rows or something along those lines. That is easy to do with a counter and something like this:
    Code:
    -- declare section
    vRecCounter  NUMBER := 0;
    vCommitPoint NUMBER := 10000;

    -- then at the end of your loop
    vRecCounter := vRecCounter + 1;   -- count each inserted row
    IF MOD(vRecCounter, vCommitPoint) = 0 THEN
        COMMIT;
        DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSDATE, 'HH24:MI:SS') ||
            ' Commit Reached Rows = (' || TO_CHAR(vRecCounter) || ')');
    END IF;
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  7. #7
    I'm having the same kind of problem...

    Have you managed to solve this?

  8. #8
    We are still working on this issue at a greatly reduced pace. With a self-imposed restriction against changing the code in use, we had a big pow-wow with Sun, Oracle, etc. to discuss it. The basic answer was that, as constructed, this test is single-threaded and CPU-bound; they posited that the faster the CPU, the faster the test would run. We have obtained a loaner system from Sun (V65, 3.06 GHz CPU) and are about to retest.
