  1. #1
    Join Date
    Aug 2004
    Posts
    19

    Unanswered: Avoid duplicate rows error in sqlldr

    Hi,

Is there any way in sqlldr (SQL*Loader) to avoid the duplicate rows error (ORA-00001)? We have a unique key constraint on the table, and during error handling sqlldr reports 999 rows not loaded because of duplicate rows. We are trying to keep this error out of the log file.

    Any help??

    Thanks.

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    So what do you want SQL Loader to do instead?

  3. #3
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    >We are trying to avoid getting this error in log file.
    Disable the PK prior to invoking SQLLDR.
    Remove the duplicate rows from the file prior to loading.
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.
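A minimal sketch of the second suggestion (removing duplicates from the file before loading), assuming a comma-delimited data file with the key in the first column; the file names and sample rows are hypothetical, not from the thread. The awk idiom keeps the first row seen for each key and drops later ones:

```shell
# Hypothetical comma-delimited data file; the key is assumed to be column 1.
printf '100,alpha\n200,beta\n100,gamma\n' > load_data.csv

# Keep only the first row seen for each key value: awk's seen[] array
# records keys already emitted, so later duplicates are skipped.
awk -F',' '!seen[$1]++' load_data.csv > load_data_dedup.csv

cat load_data_dedup.csv
```

Note that "which duplicate survives" is arbitrary here (the first one in file order), which is exactly the decision raised later in this thread.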

  4. #4
    Join Date
    Jul 2003
    Posts
    2,296
    Do you WANT sqlloader to load duplicate rows?
    Or you just don't want the errors?

You can always increase the ERRORS parameter, which will then let sqlldr load all the non-duplicate rows from your flat file into the table. It sounds more like sqlldr is crapping out (exiting from the load) because you are hitting the default ERRORS limit.
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  5. #5
    Join Date
    Aug 2004
    Posts
    19
I don't want sqlldr to load the duplicate rows, and I want to avoid the error in the log file.
    How will the ERRORS parameter help, and how do I use it?

    More specifically, I am trying to find something similar to "create unique index ... with ignore_dup_row" in Sybase. When Sybase uses this type of index while loading data (bcp in - similar to sqlldr), it ignores duplicate rows and does not raise any error...

    Thanks,

    Jagannath

  6. #6
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    How do you decide which row(s) to throw away?
    What happens if/when the PK values match but the other columns have different values?

  7. #7
    Join Date
    Aug 2004
    Posts
    19
    Based on the PK

  8. #8
    Join Date
    Jun 2004
    Location
    Liverpool, NY USA
    Posts
    2,509
I think what he is saying is that the Sybase behavior is a cop-out. Oracle allows you to ignore duplicate data; it just counts the duplicates. What is the big deal with Oracle telling you that it found duplicates? It still doesn't save them. If they really bother you and you are on Oracle 9.2, reference the file as an external table and define in your INSERT ... SELECT how to ignore the duplicates. But again, who cares if duplicates are found?
    Bill
    You do not need a parachute to skydive. You only need a parachute to skydive twice.
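A sketch of the external-table approach mentioned above, assuming Oracle 9.2+ with a comma-delimited file; every name here (the directory object, file, column names, and target table) is a hypothetical placeholder, not something from the thread. Deduplication is done in the SELECT with an analytic ROW_NUMBER, keeping one arbitrary row per key:

```sql
-- Hypothetical directory object pointing at where the flat file lives.
CREATE DIRECTORY load_dir AS '/data/loads';

-- External table over the flat file; columns/types are assumptions.
CREATE TABLE load_data_ext (
  id    NUMBER,
  name  VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('load_data.csv')
);

-- Keep one row per key; which duplicate survives is arbitrary here,
-- which matches the "based on the PK" answer earlier in the thread.
INSERT INTO target_table (id, name)
SELECT id, name
FROM (
  SELECT id, name,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY id) AS rn
  FROM load_data_ext
)
WHERE rn = 1;
```

Because duplicates are filtered out in the query itself, the unique constraint is never violated and nothing lands in a log or bad file.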

  9. #9
    Join Date
    Jul 2003
    Posts
    2,296
    ERRORS (errors to allow)
    Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in Invoking SQL*Loader.

    ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value specified for ERRORS, then SQL*Loader terminates the load. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.

On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up to that point, however, is committed.

    SQL*Loader maintains the consistency of records across all tables. Therefore, multitable loads do not terminate immediately if errors exceed the error limit. When SQL*Loader encounters the maximum number of errors for a multitable load, it continues to load rows to ensure that valid rows previously loaded into tables are loaded into all tables and/or rejected rows filtered out of all tables.

    In all cases, SQL*Loader writes erroneous records to the bad file.
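Putting the ERRORS advice into a concrete invocation, here is a hedged sketch of what the command line might look like; the file names and the 5000 value are assumptions, and ERRORS just needs to exceed the number of duplicates you expect:

```shell
# Hypothetical file names. Raise ERRORS above the expected duplicate
# count so the load completes instead of aborting at the default limit.
# Duplicate rows still violate the unique constraint, are counted in
# the log, and are written to the bad file -- they are never loaded.
sqlldr userid=scott/tiger control=load_data.ctl log=load_data.log \
       bad=load_data.bad errors=5000
```

This does not silence the ORA-00001 entries in the log; it only stops sqlldr from terminating the load when it hits them.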
