  1. #1
    Join Date
    Jan 2003
    Location
    Atlanta
    Posts
    134

    Unanswered: Import taking like forever

    Hi,

    I have an export file of about 11 GB that I am trying to import into a schema.

    The first command I used (to import only the definitions) is:

    imp sys/%PWD%@%ORACLE_SID% fromuser=xxx touser=xxx ignore=y rows=n indexes=y grants=y commit=y buffer=5000000 file=%O_DATA_PATH%\%FILE% log=%O_DATA_PATH%\%ORACLE_SID%import.log

    Then I am using this (to import the rows):

    imp sys/%PWD%@%ORACLE_SID% fromuser=xxx touser=xxx ignore=y rows=y indexes=n grants=n commit=y buffer=5000000 file=%O_DATA_PATH%\%FILE% log=%O_DATA_PATH%\%ORACLE_SID%import.log

    It appears to have been working on one table (which has 14 million records) since yesterday, almost 12 hours now. Rows are being imported, as I can see by running count(*) against that table.

    Is something wrong, given that it is taking more than 24 hours to import an 11 GB export file? What can I do to get this to complete? The box has 366952 K of memory with 2759588 K available, and CPU usage ranges from 7 to 50%.

    Any help is appreciated.

    Vinnie

  2. #2
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    For large tables, I would drop the indexes, take all the small rollback segments offline, bring a LARGE rollback segment online, run the import, then rebuild the indexes. You might even set the target table to NOLOGGING, as you are probably archiving like crazy with this. Are the tablespaces extending in "small clips"?
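
    The steps above can be sketched in SQL. This is a sketch only: the rollback segment, table, index, and column names below are hypothetical stand-ins, and the statements must be adapted to the actual objects before running them against the instance.

    ```sql
    -- Swap in a large rollback segment for the import
    -- (RBS_SMALL1 / RBS_BIG are hypothetical names).
    ALTER ROLLBACK SEGMENT rbs_small1 OFFLINE;
    ALTER ROLLBACK SEGMENT rbs_big ONLINE;

    -- Drop the indexes before the row import ...
    DROP INDEX big_table_idx1;

    -- ... and minimize redo generation on the target table.
    ALTER TABLE big_table NOLOGGING;

    -- After the import completes, rebuild the indexes
    -- (NOLOGGING here too, to keep the rebuild fast) and
    -- restore normal logging on the table.
    CREATE INDEX big_table_idx1 ON big_table (col1) NOLOGGING;
    ALTER TABLE big_table LOGGING;
    ```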

    HTH
    Gregg

  3. #3
    Join Date
    Dec 2003
    Location
    Oklahoma, USA
    Posts
    354
    Agree with Gregg. Vinnie, you ran the first import with the INDEXES=Y parameter. That pass also imported any constraints on the tables, so unless you did something about it afterwards, all the constraints (indexes, FKs, etc.) were enabled when you ran the second import.
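
    A quick way to check for (and temporarily disable) enabled constraints on the target schema, assuming the schema is XXX and using hypothetical table/constraint names:

    ```sql
    -- List enabled constraints owned by the target schema
    SELECT constraint_name, constraint_type, table_name
      FROM dba_constraints
     WHERE owner = 'XXX'
       AND status = 'ENABLED';

    -- Disable a foreign key before the row import, then
    -- re-enable it afterwards (names are placeholders):
    ALTER TABLE xxx.big_table DISABLE CONSTRAINT big_table_fk1;
    -- ... run the import ...
    ALTER TABLE xxx.big_table ENABLE CONSTRAINT big_table_fk1;
    ```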

    JoeB

  4. #4
    Join Date
    Jan 2003
    Location
    Atlanta
    Posts
    134
    Originally posted by gbrabham
    For large tables, I would drop the indexes, take all the small rollback segments offline, bring a LARGE rollback segment online, run the import, then rebuild the indexes. You might even set the target table to NOLOGGING, as you are probably archiving like crazy with this. Are the tablespaces extending in "small clips"?

    HTH
    Gregg
    Hi Gregg,

    Thanks for the suggestion. I verified again that there are no indexes on this table at all, and logging is turned off. Is there anything else I could do to speed up the process? I know that as a rule of thumb an import takes about 30-45 minutes per 1 GB of data, but this is abnormally slow. Any help is appreciated.
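
    Checks of this kind can be run against the data dictionary; a sketch, with XXX and BIG_TABLE as placeholders for the actual owner and table:

    ```sql
    -- Confirm there are no indexes on the table
    SELECT index_name
      FROM dba_indexes
     WHERE table_owner = 'XXX'
       AND table_name  = 'BIG_TABLE';

    -- Confirm logging is disabled for the table
    SELECT logging
      FROM dba_tables
     WHERE owner      = 'XXX'
       AND table_name = 'BIG_TABLE';
    ```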

    Vinnie

  5. #5
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    What are the current # of extents on the table being imported?
    What is the next_extent size for the table?

    The tablespace ... what kind of free space does it have? If it needs to extend, what is its next extent size? Are there other constraints that have to be validated for each record being inserted?
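
    Dictionary queries along these lines will answer those questions (owner, segment, and tablespace names are placeholders):

    ```sql
    -- Current extent count and NEXT_EXTENT size for the table's segment
    SELECT segment_name, extents, next_extent
      FROM dba_segments
     WHERE owner        = 'XXX'
       AND segment_name = 'BIG_TABLE';

    -- Free space remaining in the target tablespace
    SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
      FROM dba_free_space
     WHERE tablespace_name = 'USERS_TS'
     GROUP BY tablespace_name;
    ```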

    Gregg

  6. #6
    Join Date
    Jan 2003
    Location
    Atlanta
    Posts
    134
    Originally posted by gbrabham
    What are the current # of extents on the table being imported?
    What is the next_extent size for the table?

    The tablespace ... what kind of free space does it have? If it needs to extend, what is its next extent size? Are there other constraints that have to be validated for each record being inserted?

    Gregg
    Hi,

    This is a schema import, and the tablespace is locally managed. Its size is 13 GB. The no-redo (NOLOGGING) option is set for now, so no redo logs are being generated. There are no other constraints on the hanging table. Any other thoughts?

    Thanks
    Vinnie
