  1. #1
    Join Date
    May 2007

    Unanswered: SQL*Loader Losing Records

    I'm trying to load several large files (~2.5m records each) into Oracle 10g via SQL*Loader. Most files load properly, but occasionally only a portion of the data loads. For example, a recent attempt loaded only about 200k of the 2.5m records. The log file shows no rejected records, there was no error, and no .bad file was produced. Furthermore, a file that loads correctly one time might get truncated on another attempt, or vice versa. One odd thing I noticed is that the final few commits add very few new records before the load terminates:

    Commit point reached - logical record count 203800
    Commit point reached - logical record count 203864
    Commit point reached - logical record count 203928
    Commit point reached - logical record count 203932
    Commit point reached - logical record count 203933

    Does anyone know what might be the problem?


    - Elliot

  2. #2
    Join Date
    Aug 2003
    Where the Surf Meets the Turf @Del Mar, CA
    Provided Answers: 1
    I don't doubt that a problem exists.
    I do doubt that ZERO clues are left behind to help solve the mystery.
     sqlldr help=yes
    userid          ORACLE username/password
    control         Control file name
    log             Log file name
    bad             Bad file name
    data            Data file name
    discard         Discard file name
    discardmax              Number of discards to allow
    skip            Number of logical records to skip
    load            Number of logical records to load
    errors          Number of errors to allow
    rows            Number of rows in conventional path bind array or between direct path data saves
    bindsize                Size of conventional path bind array in bytes
    silent          Suppress messages during run (header,feedback,errors,discards,partitions)
    direct          use direct path
    _synchro                internal testing
    parfile         parameter file: name of file that contains parameter specifications
    parallel                do parallel load
    file            File to allocate extents from
    skip_unusable_indexes           disallow/allow unusable indexes or index partitions
    skip_index_maintenance          do not maintain indexes, mark affected indexes as unusable
    commit_discontinued             commit loaded rows when load is discontinued
    _display_exitcode               Display exit code for SQL*Loader execution
    readsize                Size of Read buffer
    external_table          use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
    columnarrayrows         Number of rows for direct path column array
    streamsize              Size of direct path stream buffer in bytes
    multithreading          use multithreading in direct path
    resumable               enable or disable resumable for current session
    resumable_name          text string to help identify resumable statement 
    resumable_timeout               wait time (in seconds) for RESUMABLE
    date_cache              size (in entries) of date conversion cache
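    If the default ERRORS limit is the culprit (by default SQL*Loader discontinues the load after 50 errors), an explicit override will confirm or rule it out. A sketch only; the username, control file, and output file names below are placeholders:

    ```sh
    sqlldr userid=scott/tiger control=load.ctl log=load.log bad=load.bad errors=999999
    ```

    If the load then runs to completion, or the .bad file starts filling up, the silent truncation was the error limit being reached.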
    Without some specific failure I'd normally decline to speculate; however, my first guess is that the ERRORS limit is being hit (by default SQL*Loader discontinues the load after 50 errors).
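    Independent of the error limit, the row-disposition counters at the bottom of the log are worth totaling up against a record count of the input file. A sketch of that bookkeeping; the patterns are the standard SQL*Loader summary lines, and the log excerpt here is fabricated for illustration:

    ```python
    import re

    def parse_loader_log(log_text):
        """Pull the row-disposition counters out of a SQL*Loader log summary."""
        patterns = {
            "loaded": r"(\d+)\s+Rows successfully loaded",
            "errors": r"(\d+)\s+Rows not loaded due to data errors",
            "when_failed": r"(\d+)\s+Rows not loaded because all WHEN clauses were failed",
            "null_fields": r"(\d+)\s+Rows not loaded because all fields were null",
        }
        counts = {}
        for key, pat in patterns.items():
            m = re.search(pat, log_text)
            counts[key] = int(m.group(1)) if m else 0
        return counts

    # Fabricated excerpt in the standard 10g log summary format:
    sample = """
    Table MYTABLE:
      203933 Rows successfully loaded.
      0 Rows not loaded due to data errors.
      0 Rows not loaded because all WHEN clauses were failed.
      0 Rows not loaded because all fields were null.
    """
    counts = parse_loader_log(sample)
    # If the four counters together fall far short of the data file's record
    # count, records are vanishing before the WHEN/error accounting ever sees
    # them (short reads, embedded record terminators, and the like), which is
    # a different problem than rejected rows.
    ```
    
    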
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.
