  1. #1
    Join Date
    May 2007
    Posts
    2

    Unanswered: OCI-21500: internal error code, arguments: [KGHALO2]

    Hi,

    We have a J2EE application running in OAS that uses the Oracle 9.0.1.6 OCI driver to connect to an Oracle 9.2.0.5 database server. Both the application server and the database server run on Sun Solaris 9.

    In one area of the code, and only on the production server, I am getting the error OCI-21500: internal error code, arguments: [KGHALO2] when a batch statement (ps.executeBatch()) is executed with a very large number of batch elements added to the batch.
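    For illustration only, the failing pattern is roughly the following (the connection URL, credentials, table and column names here are made up; this is a sketch, not the actual application code):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BigBatchSketch {
        public static void main(String[] args) throws Exception {
            // OCI driver URL, credentials and table are hypothetical
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:oci8:@PROD", "scott", "tiger");
            conn.setAutoCommit(false);
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO batch_test (id, payload) VALUES (?, ?)");
            for (int i = 0; i < 100000; i++) {   // very large batch, never flushed
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
            }
            ps.executeBatch();   // one huge batch sent in a single call
            conn.commit();
            ps.close();
            conn.close();
        }
    }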

    When this error occurs the JVM crashes; the output is as follows:

    kghalo bad size 0x048d222c
    ********** Internal heap ERROR KGHALO2 addr=0x0 *********


    ******************************************************
    HEAP DUMP heap name="Alloc environm" desc=0x117b744
    extent sz=0x1024 alt=56 het=32767 rec=0 flg=2 opc=2
    parent=0 owner=0 nex=0 xsz=0x1024
    EXTENT 0
    Chunk 25c7c70 sz= 1152 free " "
    Chunk 25c80f0 sz= 60 freeable assoc with mark prv=0 nxt=0
    Chunk 25c812c sz= 52 freeable assoc with mark prv=0 nxt=0
    Chunk 25c8160 sz= 52 freeable assoc with mark prv=0 nxt=0
    ...
    ...
    ...
    Chunk 119047c sz= 20 free " "
    Bucket 1 size=272
    Bucket 2 size=528
    Bucket 3 size=1040
    Chunk 25c7c70 sz= 1152 free " "
    Bucket 4 size=2064
    Bucket 5 size=4112
    Bucket 6 size=16400
    Bucket 7 size=32784
    Total free space = 2484
    UNPINNED RECREATABLE CHUNKS (lru first):
    PERMANENT CHUNKS:
    Chunk 118f540 sz= 24 perm "perm " alo=24
    Permanent space = 24
    ******************************************************
    Hla: 0

    OCI-21500: internal error code, arguments: [KGHALO2], [0x0], [], [], [], [], [], []


    Finding -

    During my investigation I found Bug 3649683, "OERI:KGHALO2 / OERI:KGHALP1 on a query with a large number of predicates", for which a patch is available for 9.2.0.5.

    But when I wrote a test case to reproduce the issue with a large number of batch elements passed to executeBatch, I got a different error and never managed to reproduce the original one. The error reported by the test case is:


    java.sql.BatchUpdateException: ORA-24381: error(s) in array DML

    at oracle.jdbc.dbaccess.DBError.throwBatchUpdateException(DBError.java:451)
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:3753)
    at SQLTestSuite.insertLots(SQLTestSuite.java:121)
    at SQLTestSuite.main(SQLTestSuite.java:44)
    99999

    This error is due to a limit set within the OCI driver on the number of batch elements; some versions allow 32K, others 64K.
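    For comparison, staying under those limits would mean flushing the batch in chunks; a rough sketch (FLUSH_SIZE and the table name are assumptions, not values from our application) would be:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ChunkedBatchSketch {
        // Flush every FLUSH_SIZE rows so the element count stays well
        // below the driver-side batch limit (32K/64K depending on version).
        private static final int FLUSH_SIZE = 5000;   // assumed value

        static void insertLots(Connection conn, int totalRows) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO batch_test (id, payload) VALUES (?, ?)");   // hypothetical table
            int pending = 0;
            for (int i = 0; i < totalRows; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (++pending >= FLUSH_SIZE) {
                    ps.executeBatch();   // send this chunk to the server
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();       // flush the remainder
            }
            conn.commit();
            ps.close();
        }
    }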

    I can see this error only in production.

    From the bug above it sounds as if the issue is on the Oracle database server, which returns an unexpected error code that causes the OCI driver, and hence the JVM, to crash.

    But the OCI-21500 error number suggests the error occurred in the client rather than on the server. And from the KGHALO2 error message and the heap dump it produced, it looks as though the driver is unable to allocate more memory, or hit an exception while allocating a memory block/chunk.

    Does anybody have a clue what this could be? If so, could you please explain it?

    Thanks

    Nirmal

  2. #2
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    Note:402755.1
    Known bug when > 64MB
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  3. #3
    Join Date
    May 2007
    Posts
    2
    The note mentioned speaks about a similar error message, but the root cause of that error is a message payload larger than 64MB.

    The failing process, and hence the transaction mentioned above, is in our case also initiated by an MDB (triggered by an AQ JMS message).

    But the message payload is not greater than 64MB; it is just 746 bytes.

    So I don't think we are hitting the payload issue mentioned in the note.

    But I am now getting the impression that the native code of the OCI driver is restricted to a maximum amount of memory it can allocate for its processing at runtime.

    During execution of the transaction, the query might require more memory than that maximum limit.
    Does anybody know whether there is a maximum memory allocation limit in the OCI driver, and if so, what it is?

    Regards
    Nirmal
