  1. #1
    Join Date
    Jun 2002
    Posts
    9

    Unanswered: SQL*Loader: Direct path / Unique index / ORA-01452

    Hello !

    Even though I've read the Oracle documentation, I don't understand what's wrong.
    I have a .dat file containing data separated by ';'.
    Many of the lines in the file are duplicates.
    I want to load them with this control file and the direct path method:

    LOAD DATA
    INFILE DM_Famille.DAT
    APPEND
    INTO TABLE DM_Famille_temp
    FIELDS TERMINATED BY ';'
    TRAILING NULLCOLS
    (d_refsocapp,d_famille,d_desfamille)
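
    For reference, a direct path run with this control file would be invoked along these lines (the connect string and file names here are placeholders, not taken from the original post):

    sqlldr userid=scott/tiger control=DM_Famille.ctl log=DM_Famille.log direct=true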

    My table is defined as :
    CREATE TABLE DM_FAMILLE_TEMP (
    D_REFSOCAPP VARCHAR2(10) NOT NULL,
    D_FAMILLE VARCHAR2(10) NOT NULL,
    D_DESFAMILLE VARCHAR2(35) NOT NULL,
    CONSTRAINT DM_FAMILLE_PK_TEMP
    PRIMARY KEY ( D_REFSOCAPP, D_FAMILLE ))

    But ALL of the .dat file gets loaded into the table (what about the primary key?),
    and in the log file I get ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found!

    To me, the normal behaviour of the loader would be to insert into the table only the lines that don't already exist?!

    Thx for help

  2. #2
    Join Date
    Apr 2002
    Posts
    6

    Hi.

    If you want to append only the rows that don't already exist in the table, you can take the following steps.

    1. Create a temporary table with the same structure as the original table. Do not create any constraints.
    2. Load all the data into the temporary table.
    3. Insert into the original table the rows from the temporary table that don't already exist in the original table. See the example for step 3 below.

    Example :
    Original Table - DM_FAMILLE_TEMP
    Temporary Table - DM_FAMILLE_TEMP1 (do not create any constraints on this table)

    insert /*+ append */ into dm_famille_temp
    select * from dm_famille_temp1 a
    where not exists
    (select 1 from dm_famille_temp b
    where b.D_REFSOCAPP = a.D_REFSOCAPP
    and b.D_FAMILLE = a.D_FAMILLE
    and b.D_DESFAMILLE= a.D_DESFAMILLE);
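
    One caveat worth adding (my note, not from the reply above): the NOT EXISTS only filters out rows that are already in DM_FAMILLE_TEMP. If DM_FAMILLE_TEMP1 itself contains the same (D_REFSOCAPP, D_FAMILLE) key more than once, the insert will still hit the primary key. A minimal sketch that keeps one row per key, assuming any one of the descriptions is acceptable:

    insert /*+ append */ into dm_famille_temp
    select d_refsocapp, d_famille, d_desfamille
    from (
        select a.d_refsocapp, a.d_famille, a.d_desfamille,
               -- keep a single row for each primary key value
               row_number() over (partition by a.d_refsocapp, a.d_famille
                                  order by a.d_desfamille) rn
        from dm_famille_temp1 a
        where not exists
              (select 1 from dm_famille_temp b
               where b.d_refsocapp = a.d_refsocapp
               and   b.d_famille   = a.d_famille)
    )
    where rn = 1;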

  3. #3
    Join Date
    Jun 2002
    Posts
    9
    Thanks, that seems to be a solution.
    But is there really no way to use direct path on a table with constraints to append rows?
    Because the whole point of direct path is the time saved, and going through a temp table with a NOT EXISTS query may end up only a little faster than the conventional path?!

    Jd

  4. #4
    Join Date
    Jun 2008
    Posts
    1

    Regarding Loading with Direct Path

    Quote Originally Posted by jdrulez
    Thanks, that seems to be a solution.
    But is there really no way to use direct path on a table with constraints to append rows?
    Because the whole point of direct path is the time saved, and going through a temp table with a NOT EXISTS query may end up only a little faster than the conventional path?!

    Jd
    Hi jdrulez

    The problem in your case is the use of direct path rather than a conventional load.

    With a conventional load, if you have an existing table with constraints, sqlldr basically executes a lot of INSERT statements against your table, and in your case each duplicate record will simply fail to insert and be put in the bad file.
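
    For example, a conventional path run (my sketch; the connect string and file names are assumptions carried over from the control file above) just drops the direct option and raises the error limit so the load isn't aborted after 50 rejected duplicates:

    sqlldr userid=scott/tiger control=DM_Famille.ctl log=DM_Famille.log bad=DM_Famille.bad errors=999999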

    With direct path, sqlldr effectively builds data blocks with the same structure as the target table, loads all the data into them, and only afterwards tries to re-apply the constraints and rebuild the index; once it has the final data blocks, it writes them straight into the target table's data segment instead of going through normal inserts.

    Which is why it fails with an error telling you that it could not build the unique index for the primary key on the data it had loaded.
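
    As a side note (not in the original reply): after a direct path load that hits duplicates, the primary key index is typically left in an UNUSABLE state, which you can confirm with a quick dictionary query such as:

    select index_name, status
    from   user_indexes
    where  table_name = 'DM_FAMILLE_TEMP';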

    Use direct path only when you have absolute trust in the data being put in.

    If you want the speed, do some pre-processing on the data and do your checks there. 'sort -u' will work fast enough if your records are separated by new-lines.
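
    For instance (a one-liner of my own, assuming the .dat file name from the control file in post #1):

    sort -u DM_Famille.DAT > DM_Famille_dedup.DAT

    Note that this removes only fully identical lines; two rows with the same key but different descriptions would still collide, which is where the SQL in post #2 (or the deduplicating variant after it) comes in.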

    Best of Luck
    Last edited by nishantnahsin; 06-30-08 at 04:53.
