  1. #1
    Join Date
    Feb 2002
    Location
    Hamilton
    Posts
    150

    Unanswered: DB2 load utility errors

    DB2level is
    DB21085I Instance "arcprod" uses "64" bits and DB2 code release "SQL09012"
    with level identifier "01030107".
    Informational tokens are "DB2 v9.1.0.2", "s070210", "U810940", and Fix Pack
    "2".
    Product is installed at "/usr/opt/db2_09_01".
    OS is
    $ uname -a
    AIX archa15 3 5 00CEBB6C4C00

    I am trying to run a bunch of loads into the database; an example is below:

    db2 "load from FBA1.ixf of ixf savecount 5000 insert into ODADMIN.FBA1"
    db2 "load from FCA1.ixf of ixf savecount 5000 insert into ODADMIN.FCA1"
    db2 "load from GAA1.ixf of ixf savecount 5000 insert into ODADMIN.GAA1"
    db2 "load from GBA1.ixf of ixf savecount 5000 insert into ODADMIN.GBA1"
    db2 "load from GCA1.ixf of ixf savecount 5000 insert into ODADMIN.GCA1"
    db2 "load from HAA1.ixf of ixf savecount 5000 insert into ODADMIN.HAA1"
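
    These repeated commands could also be driven from a loop over the .ixf files. A sketch, assuming (as in the commands above) that each target table name matches its file's base name:

```shell
# Sketch: run one LOAD per .ixf file in a directory, deriving the table
# name from the file name (e.g. FBA1.ixf -> ODADMIN.FBA1), and append
# each load's output to a log.
load_all() {
  dir=$1 schema=$2
  for f in "$dir"/*.ixf; do
    [ -e "$f" ] || continue          # no .ixf files: skip the unmatched glob
    base=$(basename "$f" .ixf)
    db2 "load from $f of ixf savecount 5000 insert into $schema.$base" \
      >> load.log 2>&1
  done
}

load_all . ODADMIN
```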

    But when I pipe the output to a text file, I see messages similar to these in it:

    SQL3109N The utility is beginning to load data from file
    "/archive/prod/arscache/cache1/export/ZBA1.ixf".

    SQL3500W The utility is beginning the "LOAD" phase at time "10/18/2007
    08:30:47.255813".

    SQL3150N The H record in the PC/IXF file has product "DB2 02.00", date
    "20071017", and time "163049".

    SQL3153N The T record in the PC/IXF file has name "ZBA1.ixf", qualifier "",
    and source " ".

    SQL3519W Begin Load Consistency Point. Input record count = "0".

    SQL3520W Load Consistency Point was successful.

    SQL0968C The file system is full. SQLSTATE=57011

    SQL0968C The file system is full. SQLSTATE=57011

    The only filesystem that is full is

    /dev/hd4 393216 0 100% 2394 81% /

    What could possibly be happening that is filling up the root filesystem?

    thanks

    Mark

  2. #2
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    Surprisingly, LOAD generates quite a few temporary files, which reside by default under the database directory. You can specify a different location with the TEMPFILES PATH option.
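    To confirm it's the temporary files, you could watch the filesystem grow while a load runs. A rough sketch (df's column layout varies between platforms, so the awk field numbers may need adjusting):

```shell
# Sketch: poll a filesystem's usage a fixed number of times at a given
# interval, logging timestamped df output so growth can be matched
# against the load steps.
poll_fs() {
  fs=$1 count=$2 interval=$3 log=$4
  i=0
  while [ "$i" -lt "$count" ]; do
    printf '%s %s\n' "$(date '+%H:%M:%S')" \
      "$(df -kP "$fs" | awk 'NR==2 {print $3 " KB used, " $4 " KB free"}')" >> "$log"
    i=$((i + 1))
    sleep "$interval"
  done
}

# e.g. watch / every 10 seconds for an hour, in the background:
# poll_fs / 360 10 /tmp/root_usage.log &
```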
    ---
    "It does not work" is not a valid problem statement.

  3. #3
    Join Date
    Feb 2002
    Location
    Hamilton
    Posts
    150
    I modified my script to include the TEMPFILES PATH option:

    db2 "load from XAA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.XAA1"
    db2 "load from XBA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.XBA1"
    db2 "load from YAA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.YAA1"
    db2 "load from YBA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.YBA1"
    db2 "load from ZAA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.ZAA1"
    db2 "load from ZBA1.ixf of ixf savecount 5000 tempfiles path /archive/prod/arscache/cache2 insert into ODADMIN.ZBA1"

    But it still filled up / during the process. I did get an error the first time I tried to run it, saying it couldn't access the path above; I modified permissions and it ran without issue until / filled to 100% again.

    any ideas??

    thanks

  4. #4
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    Does it still fail during the load phase, or at the build phase? If it's the latter, then maybe dropping some of the indexes on the affected table could help. Unless you determine precisely which files fill up the filesystem, I can't give you a better answer.
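    One way to determine that, using only POSIX find options (-xdev, -mtime, -size all work on AIX), is to list the largest recently modified files on the root filesystem without descending into other mounts:

```shell
# Sketch: list the 20 largest files on a filesystem that were modified in
# the last day, staying on that one filesystem (-xdev stops find from
# crossing into other mounted filesystems).
largest_recent() {
  find "$1" -xdev -type f -mtime -1 -size +1048576c -exec ls -l {} + 2>/dev/null \
    | sort -k5,5 -rn \
    | head -20
}

largest_recent /
```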
    ---
    "It does not work" is not a valid problem statement.

  5. #5
    Join Date
    Feb 2002
    Location
    Hamilton
    Posts
    150
    We figured it out: a directory was created under / instead of as a separate filesystem, as we had requested from our wonderful unix team, and I didn't notice it because I was doing df -k / instead of df -k on every filesystem.
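
    For anyone hitting the same thing: a quick way to verify that a directory really is its own filesystem, rather than a plain directory sitting on its parent's, is to compare the device df reports for it and for its parent. A sketch using only POSIX df (the path is the one from this thread):

```shell
# Sketch: report whether a directory is its own mount point by comparing
# the filesystem device that df assigns to it and to its parent directory.
# (The root directory / is its own parent, so this check is only useful
# for subdirectories.)
is_mountpoint() {
  dev_dir=$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $1}')
  dev_parent=$(df -P "$1"/.. 2>/dev/null | awk 'NR==2 {print $1}')
  [ "$dev_dir" != "$dev_parent" ]
}

if is_mountpoint /archive/prod/arscache/cache2; then
  echo "cache2 is a separate filesystem"
else
  echo "cache2 is just a directory on its parent's filesystem"
fi
```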

    Thanks for the help, though; the tempfiles path tip came in handy.

    Mark
