  1. #1
    Join Date
    Jan 2003
    Posts
    23

    Unanswered: loading lots of data & checkpoints

    I've been searching around looking for any ideas on this - as some of you may know, I've had to reload data into a database. The problem is that I cannot take this db offline to do it. It is an OLTP-ish system, so it cannot hang for very long. This leads to the upcoming problem:

    is there _anything_ I can do to soften the blow of inserting tons of data into Informix with regard to checkpoints? I'm running the horrific 7.30.UC3, and I've been banging my head tweaking checkpoint settings, LRUs, and the LRU min/max dirty settings, but I cannot seem to get those dang checkpoints down. (On my test system I had them at about 15 seconds every 5 minutes, but on the production system it was 40-80 seconds every 5 minutes.) Any tips/tricks? (LRUS 127, CLEANERS 127, LRU_MIN_DIRTY 0, LRU_MAX_DIRTY 1)
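
    For reference, roughly how those settings look in the onconfig (the CKPTINTVL value here is inferred from the "every 5 minutes" above, not something I've confirmed):

        LRUS            127     # number of LRU queues
        CLEANERS        127     # page-cleaner threads
        LRU_MIN_DIRTY   0       # clean each LRU queue down to 0% dirty
        LRU_MAX_DIRTY   1       # start cleaning when a queue hits 1% dirty
        CKPTINTVL       300     # checkpoint interval in seconds (5 minutes)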

    I know 9.whatever has "fuzzy" checkpoints that can help, but 9 is not an option for me.

    thanks - you guys have been most useful in the past.

  2. #2
    Join Date
    Aug 2002
    Location
    Bonn/Germany
    Posts
    152
    You might use the High Performance Loader (HPL) in express mode
    for this.

    In express mode the data is written directly to the
    pages on disk, bypassing the buffer pool,
    and the activity is not logged.

    However, the table will be exclusively locked during this
    operation, and afterwards you need to perform a level-0 archive
    in order to write to the table again (it is marked "read-only" after
    the HPL job completes).
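
    (For reference, a level-0 archive can be taken with ontape, e.g.
    "ontape -s -L 0", or with whatever backup tool you normally use.)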

    This can avoid long checkpoints during loading.
    Be aware that you will still have huge I/O activity during
    the load (because of the HPL), so it might extend your
    checkpoints a little bit.

    Unfortunately you are on version 7.3, so you have to use
    "ipload" to configure the HPL jobs, which can be quite
    an amazing experience.

    Starting with 9.21 you have "onpladm", a command line interface
    to HPL.

    HTH.

    Best regards

    Eric
    --
    IT-Consulting Herber
    WWW: http://www.herber-consulting.de
    Email: eric@herber-consulting.de

    ***********************************************
    Download the IFMX Database-Monitor for free at:
    http://www.herber-consulting.de/BusyBee
    ***********************************************

  3. #3
    Join Date
    Jan 2003
    Posts
    23
    I'd love to use HPL in express mode,
    but I have a text blob, which effectively kicks HPL in the gut.
    It has to revert to normal mode.

    There's an X interface to HPL (ipload) you can use too,
    although it doesn't work so well for me.. oh well.

    But thanks for the reply.

  4. #4
    Join Date
    Aug 2002
    Location
    Bonn/Germany
    Posts
    152
    Hmm, without HPL it's difficult.

    Two possible but not always practicable solutions:

    1) Slow down the load so that the page cleaners are able
    to keep up with it (assuming that
    LRU_MIN_DIRTY/LRU_MAX_DIRTY
    are already very, very low)
    -> you might, for example, split the load file into several smaller
    files and load them one after another with a sleep period
    between the loads
    -> or write your own load program (with Perl/DBI this should
    be fairly simple) and sleep from time to time - see the sketch
    after these two points

    2) Build a separate instance containing only this table, and create
    a view or synonym in the original database pointing to this new
    instance.
    -> this ensures that you have your own dedicated bufferpool,
    thus not filling the "main" bufferpool with dirty pages
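
    Something along these lines would do, as a rough sketch (Python here
    instead of Perl/DBI; the informixdb driver, table name, column list,
    file format and batch size are just placeholders for illustration,
    not anything from your actual setup):

        # Throttled bulk loader sketch: insert in small batches and sleep
        # between them so the page cleaners can keep up.
        import time
        import informixdb   # assumed DB-API driver; any DB-API module will do

        BATCH_SIZE = 500        # rows per transaction
        SLEEP_SECONDS = 2.0     # pause between batches to limit dirty pages

        def flush(cur, conn, rows):
            # paramstyle may differ depending on the driver you use
            cur.executemany(
                "insert into tablenew (id, name, payload) values (?, ?, ?)",
                rows)
            conn.commit()

        def load_file(path, conn):
            cur = conn.cursor()
            batch = []
            with open(path) as f:
                for line in f:
                    # assumes one pipe-delimited row per line: id|name|payload
                    batch.append(line.rstrip("\n").split("|"))
                    if len(batch) >= BATCH_SIZE:
                        flush(cur, conn, batch)
                        batch = []
                        time.sleep(SLEEP_SECONDS)   # let the cleaners catch up
            if batch:
                flush(cur, conn, batch)

        if __name__ == "__main__":
            conn = informixdb.connect("mydb")   # placeholder connection
            load_file("unload_part_01.unl", conn)
            conn.close()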

    I really hate to say this, but here DB2 has an advantage over
    Informix, because in DB2 you can create several bufferpools
    and assign tables to them exclusively.

    Best regards

    Eric
    --
    IT-Consulting Herber
    WWW: http://www.herber-consulting.de
    Email: eric@herber-consulting.de

    ***********************************************
    Download the IFMX Database-Monitor for free at:
    http://www.herber-consulting.de/BusyBee
    ***********************************************

  5. #5
    Join Date
    Jan 2003
    Posts
    23
    Thanks, yeah. I think I will put a throttle into the application.
    It'll make the load run longer, but if it keeps the site from sucking(tm), it should be worth it.

  6. #6
    Join Date
    Aug 2002
    Location
    Belgium
    Posts
    534
    Do you have any idea why your checkpoints are so long?
    Is your I/O slow? Is your BUFFERS setting too big to handle?

    1. I/O slow...

    Try fragmenting the tables; it will speed up your loading and checkpoint process. If you don't have KAIO, check the maxlen column of 'onstat -g ioq' to see which dbspace is driving up the checkpoint time (see the sketch below). If you use kernel AIO, check with onstat -D and onstat -g iof.
    Spreading data over several disks, or even several dbspaces on the same disk, helps.

    2. BUFFERS too big.

    I know that your version doesn't support fuzzy checkpoints, so did you try bringing the number of BUFFERS down? It will automatically slow down your load process...
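
    Something like this quick sketch can flag the long queues for the first check (it just shells out to onstat, and the column layout of 'onstat -g ioq' differs between versions, so treat the header-based parsing as an assumption):

        # Flag I/O queues whose maxlen exceeds a threshold, per "onstat -g ioq".
        import subprocess

        THRESHOLD = 25   # arbitrary; tune to what counts as "long" on your box

        def long_queues(threshold=THRESHOLD):
            out = subprocess.run(["onstat", "-g", "ioq"],
                                 capture_output=True, text=True).stdout
            results = []
            maxlen_idx = None
            for line in out.splitlines():
                cols = line.split()
                if maxlen_idx is None:
                    if "maxlen" in cols:            # locate the header row
                        maxlen_idx = cols.index("maxlen")
                    continue
                if len(cols) <= maxlen_idx or not cols[maxlen_idx].isdigit():
                    continue                        # skip repeated headers etc.
                if int(cols[maxlen_idx]) > threshold:
                    results.append((cols[0], int(cols[maxlen_idx])))
            return results

        if __name__ == "__main__":
            for queue, maxlen in long_queues():
                print("queue %-20s maxlen %d" % (queue, maxlen))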
    rws

  7. #7
    Join Date
    Jan 2003
    Posts
    23
    Well, the original data is on raw disks (22 disks), going onto cooked files on a couple of disks (stupid 2 GB limit).

    Lowering BUFFERS kills performance of the live services too much, so that's not an option.

    I also suspect the machine is going to die..
    I was so desperate I made my copier limit the rate (in KB/sec) at which I was inserting... according to bonnie the new slice can rewrite 14 MB/sec, yet it turns out I was only feeding it 500 KB/sec and I was still getting 60-second checkpoints..

    (The point of reloading the data, btw, is to fragment the table - I cannot use alter table because 1. there's not enough logical log space and 2. it would take too long.)

    On the upside, I've managed to get a band-aid together that I had originally tried to get working -
    the idea is to use a view that does a union between two identical tables. The issue is the text blob.
    Originally I did

    create view blah as select * from tableold union select * from tablenew

    but it complained that you can't have a text blob in a UNION, so I gave up.. later I was retracing my steps, trying to find any way I could, and for some odd reason I used UNION ALL instead and it worked...
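
    (I suspect that's because UNION has to sort and compare rows to eliminate duplicates, which it can't do on a text blob, while UNION ALL skips the duplicate elimination.) So the working statement was just:

    create view blah as select * from tableold union all select * from tablenew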

    Performance is a bit slower, but it should be sufficient to keep things happy until either 9.4 arrives or I somehow manage to get this table copied to another machine.

    what a pain..

  8. #8
    Join Date
    Aug 2002
    Location
    Belgium
    Posts
    534
    Well, it seems to me you will have to wait for the 9.4 release.
    For the moment, the GA date is planned for March 31st.
    rws
