I was wondering what the fastest way is to load data from 32 tables in a database on one box, over a 100 Mbps network, to a DB2 database on another machine. Both have Sun 3300 arrays with 6 disks. Right now the method is a SELECT * on one database feeding an INSERT INTO on the other, and it's taking 30 minutes to move 1 GB of data; CPU, memory, and disk I/O all look fine. Is there a way to better utilize the machines to do this faster? Would several tables at once be better than one?
Here's one option we often use in Oracle (I haven't had a need to use it in DB2 yet). I am assuming these are UNIX machines!
1. Catalog the target DB on the source DB instance (see the catalog example after this list).
2. Create a named pipe on the source machine: mknod <table-1> p (or mkfifo <table-1>).
3. Use load or import (one or the other) to pull the data into the target DB, reading from the pipe:
db2 "load client from <table-1> of del ......"
db2 "import from <table-1> of del ...."
This initiates the load/import, but it will block, waiting for data to be pushed
into the pipe, so run it in the background or in a second session.
4. db2 "export to table-1> of <del> ......
This starts the export, and as data flows into the pipe it also sets the waiting
load/import of the target table in motion.
5. Repeat the above process for ALL the tables.
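For step 1, the one-time catalog on the source machine might look like this; the node name (tgtnode), host (target-host), port (50000), and target database name (TGTDB) are placeholders, not details from the original setup:

db2 catalog tcpip node tgtnode remote target-host server 50000
db2 catalog database TGTDB at node tgtnode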
I would suggest scripting the above so you do not have to "baby-sit" the process; a rough sketch follows.
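Here is a minimal ksh sketch of steps 2-5, assuming step 1 is already done. The database names (SRCDB, TGTDB), the credentials, tables.txt (one SCHEMA.TABLE per line), the DEL format, and the "replace into" option are all placeholders standing in for the options elided above; adapt them to your setup. The load runs in its own background shell (so it gets its own CLP session and connection, since the CLP holds one connection per session) because it blocks on the pipe until the export starts writing.

#!/bin/ksh
DBUSER=dbuser                      # placeholder credentials for the target DB
DBPASS=dbpass

while read TABLE; do
    PIPE=/tmp/${TABLE}.pipe
    rm -f $PIPE
    mknod $PIPE p                  # step 2: named pipe (FIFO)

    # Step 3: start the load against the target; it blocks until the
    # export below begins writing into the pipe, so background it in
    # its own shell (and therefore its own CLP session).
    ksh -c "db2 connect to TGTDB user $DBUSER using $DBPASS &&
            db2 \"load client from $PIPE of del replace into $TABLE\";
            db2 terminate" &
    LOADPID=$!

    # Step 4: export from the source into the pipe, feeding the load.
    db2 connect to SRCDB
    db2 "export to $PIPE of del select * from $TABLE"
    db2 terminate

    wait $LOADPID                  # step 5: finish before the next table
    rm -f $PIPE
done < tables.txt

As for several tables at once: you are moving roughly 0.5 MB/s now, while a 100 Mbps link tops out around 12 MB/s, so there is headroom; splitting tables.txt into a few lists and running one copy of this loop per list is an easy way to find out.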