  1. #1
    Join Date
    Jul 2004

    Unanswered: SQL*Loader discrepancies.

    A followup to my earlier SQL*Loader queries:

    For now, I'm using conventional path loads because they're faster overall than doing a direct path load and then finding and deleting the rows that violate the unique key so it can be re-enabled.

    The table has a column that was populated from a sequence-generated key at the time we originally loaded it. Performance was poor (and direct path, which I was trying to use, disallows "sequence_name.nextval"), so I started trying different combinations of "recnum" and "sequence(count)" as the source of the key. We did see a performance improvement, but then I found that DUPLICATE sequence values were being created.

    I've been trying different combinations of parallel/no parallel and recnum/sequence(count) to see if some combination might work, but no dice so far. Can anyone tell me what might be happening?
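    For reference, the generator clauses I've been experimenting with look like this in the control file (table and column names simplified for this post):

```sql
-- SQL*Loader control-file fragment; names are illustrative only.
LOAD DATA
INFILE 'file1.dat'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
( id   RECNUM,    -- variant 1: take the record number in the data file
  col1 CHAR )

-- variant 2: swap the id line for a loader-maintained sequence:
-- ( id   SEQUENCE (COUNT, 1),
--   col1 CHAR )
```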

    A little more detail on my "control test":
    I'm loading 3 files into a table (I actually need to load 7 in real life). Together, the three files contain 4,837,791 records (42 million in the actual 7 files). However, when I count the distinct ID values, I come up with one less than that (close to 1 million duplicates in the full load). So, when I try to enable the PK on that column, it fails because there are duplicates.
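    The check I'm running is essentially this (column name simplified):

```sql
-- Compare total rows loaded against distinct key values.
SELECT COUNT(*), COUNT(DISTINCT id) FROM my_table;

-- List the duplicated key values and how many times each occurs.
SELECT id, COUNT(*)
FROM   my_table
GROUP  BY id
HAVING COUNT(*) > 1;
```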

    At first, I thought it might be parallel sessions grabbing RECNUMs so quickly that the same value was assigned more than once. However, even if I disable parallelism, I still get the error.

    Can anyone please tell me what is happening here?


  2. #2
    Join Date
    Aug 2003
    Where the Surf Meets the Turf @Del Mar, CA
    Provided Answers: 1
    Rhetorical questions -

    Why must the DB/SQLLDR "assign" the sequence number(s)?
    It would be a trivial PERL program to add a "sequence number" to the flat file prior to loading.
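    Something along these lines would do it (awk shown here instead of Perl, same idea; the file name is made up):

```shell
# Prepend a running sequence number to each record of the flat file
# before the load. NR is awk's running record counter, so the values
# are unique by construction. data.dat is a hypothetical file name.
awk '{ printf "%d,%s\n", NR, $0 }' data.dat > data_with_seq.dat
```

    Then the column is just another field in the file, and the loader has nothing to generate.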

    Alternatively, what's wrong with loading all the data (ignoring duplicate values and such) and then UPDATEing every row with unique values?
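    A sketch of that alternative, assuming a sequence named my_seq and illustrative table/column names:

```sql
-- Load everything first, then renumber every row from a database
-- sequence so the key values are guaranteed unique.
CREATE SEQUENCE my_seq START WITH 1 CACHE 1000;

UPDATE my_table SET id = my_seq.NEXTVAL;

COMMIT;
```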
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  3. #3
    Join Date
    Aug 2004
    Did you try to set up a trigger on the table that assigns unique values from a sequence (since you are doing a conventional path load)? Also, to improve performance, commit less often (load more rows between COMMITs).
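    Something like this, with hypothetical table and sequence names (note that a trigger fires only for conventional path inserts, not direct path):

```sql
-- Assign the key from a sequence on every insert during a
-- conventional path load.
CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  :NEW.id := my_seq.NEXTVAL;
END;
/
```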

  4. #4
    Join Date
    Jan 2004
    Croatia, Europe
    Provided Answers: 5
    I read that you tried to use "sequence (count)". Part of my control file looks like this:
    INTO TABLE vod_paket
    (  rbr_izv                         SEQUENCE (MAX, 1),   
    Perhaps you could try this too. However, I don't think it would improve speed, since the loader has to determine the max value while inserting 42 million rows ... My files are much smaller (up to 2000 rows), so I didn't bother much about loading time.
