  1. #1
    Join Date
    Nov 2011
    Posts
    334

    How to improve ResultSet fetch performance

    AIX 5.3 and DB2 for LUW 9.1.
    Here is the problem:
    for some reason, I want to retrieve all rows from a large table.
    The code looks like this (a Java app, using JDBC to connect to the database):
    Code:
    Statement stmt = con.createStatement();
    String sql = "SELECT * FROM tt";
    ResultSet rs = stmt.executeQuery(sql);
    while (rs.next()) {
        // just read the column values
        rs.getInt(1);
        rs.getInt(2);
        rs.getString(3);
        rs.getString(4);
        rs.getString(5);
        rs.getString(6);
        rs.getDate(7);
        rs.getDate(8);
    }
    rs.close();
    stmt.close();
    Table tt's DDL is:
    Code:
    create table tt (c1 int, c2 int, c3 char(10), c4 char(10), c5 varchar(10), c6 varchar(10), c7 date, c8 date)
    It takes about 61 seconds to retrieve 1 million rows,
    which is too slow for me.
    But if I remove this code:
    Code:
            rs.getInt(1);
            rs.getInt(2);
            rs.getString(3);
            rs.getString(4);
            rs.getString(5);
            rs.getString(6);
            rs.getDate(7);
            rs.getDate(8);
    it takes only 9 seconds.

    So does anybody know how to improve the performance of the rs.getXXX() calls?

  2. #2
    Join Date
    Apr 2012
    Posts
    1,034
    Provided Answers: 18
    Does the setFetchSize method help at all?

    You might want to make TYPE_FORWARD_ONLY and CONCUR_READ_ONLY explicit, if they are not the defaults.

    Avoiding codepage conversions (for the getString() calls) also helps, but only a little.

    But the smartest thing might be to consider alternative ways to handle bulk data (if performance is your main focus).
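    A minimal sketch of the first two suggestions, assuming a plain JDBC connection to the same tt table (the fetch size of 10000 is just a value to experiment with, not a recommendation):

    ```java
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TunedFetch {
        // Create a forward-only, read-only statement with a large fetch size,
        // so the driver can block many rows into each network round trip.
        static Statement tunedStatement(Connection con, int fetchSize) throws Exception {
            Statement stmt = con.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY,   // no scrolling needed
                    ResultSet.CONCUR_READ_ONLY);   // no updatable cursor needed
            stmt.setFetchSize(fetchSize);          // hint: rows per fetch block
            return stmt;
        }
    }
    ```

    The statement is then used exactly as in the original loop; whether the hint matters depends on the driver's defaults.
    
    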

  3. #3
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    See if you can improve it by increasing the cursor block size (Specifying row blocking to reduce overhead). Also, specifying OPTIMIZE FOR n ROWS might help if "n" is enough to fill the entire block.
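    For the OPTIMIZE FOR n ROWS suggestion, the clause is simply appended to the query text; a small helper (the name withOptimizeFor is mine, not part of any API) keeps it in one place:

    ```java
    public class OptimizeFor {
        // Append DB2's OPTIMIZE FOR n ROWS clause, which lets the engine
        // size its row blocks for a large sequential fetch.
        static String withOptimizeFor(String sql, int n) {
            return sql + " OPTIMIZE FOR " + n + " ROWS";
        }

        public static void main(String[] args) {
            // Execute the returned text through the same Statement as before.
            System.out.println(withOptimizeFor("SELECT * FROM tt", 10000));
        }
    }
    ```
    
    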

  4. #4
    Join Date
    Nov 2011
    Posts
    334
    Thanks for your responses...
    I have tried adjusting the fetch size and the cursor block size, but the improvement is very insignificant.
    So I think maybe the problem is not on the database side;
    maybe the Java object creation and conversion (e.g., rs.getDate() will create a new Date object)
    consumes most of the time.
    Maybe I should ask some people who are familiar with Java.
    Thanks again.
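    One way to check part of that theory is a standalone micro-benchmark that allocates a million java.sql.Date objects, roughly the per-row allocation rs.getDate() implies, with no database involved. This is only a sketch: it measures allocation, not the driver's internal conversion work, and timings will vary by JVM.

    ```java
    import java.sql.Date;

    public class GetterCost {
        // Create n java.sql.Date objects and return a checksum
        // so the JIT cannot eliminate the work.
        static long makeDates(int n) {
            long checksum = 0;
            for (int i = 0; i < n; i++) {
                checksum += new Date(i).getTime();
            }
            return checksum;
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            makeDates(1_000_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("1M Date objects in " + elapsedMs + " ms");
        }
    }
    ```

    If this loop is fast, the time is more likely in the driver's data conversion than in object creation as such.
    
    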

  5. #5
    Join Date
    Feb 2008
    Location
    Japan
    Posts
    3,483
    Quote Originally Posted by db2mor View Post
    ...

    But the smartest thing might be to consider alternative ways to handle bulk data (if performance is your main focus).
    I agree with db2mor.

    I'm interested in how fengsun2 wants to handle the bulk data after retrieving it through the cursor.
    If it is written to a file to pass to other program(s), EXPORT might be better.
    If some calculations and/or summing are done on it, doing most of that work in SQL might be better.
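    As an illustration of that second point (column names come from the tt DDL above; the particular aggregation is a made-up example, not something from this thread):

    ```java
    public class PushDown {
        // Instead of fetching a million rows and summing in Java,
        // let DB2 aggregate and fetch only the group totals.
        static String groupTotals() {
            return "SELECT c3, COUNT(*) AS cnt, SUM(c1) AS total FROM tt GROUP BY c3";
        }

        public static void main(String[] args) {
            System.out.println(groupTotals());
        }
    }
    ```
    
    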

  6. #6
    Join Date
    Nov 2011
    Posts
    334
    We need to pass each fetched row to another subsystem,
    which uses a tool called a rule engine.
    It supports very complex rules for processing each row.
    And the tool provides a GUI that users can use directly to
    configure the rules.

  7. #7
    Join Date
    Feb 2008
    Location
    Japan
    Posts
    3,483
    Is there any way to pass the bulk data (1 million rows?) to the tool via a file?

    Another idea is to execute EXPORT via the ADMIN_CMD procedure, then read the output file and pass each record to the tool.
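    A sketch of that second idea, assuming a delimited (DEL) output file; the path /tmp/tt.del is a placeholder and must be writable on the server side:

    ```java
    import java.sql.CallableStatement;
    import java.sql.Connection;

    public class ExportViaAdminCmd {
        // Build the EXPORT command string passed to SYSPROC.ADMIN_CMD.
        static String exportCommand(String path, String sql) {
            return "EXPORT TO " + path + " OF DEL " + sql;
        }

        static void runExport(Connection con) throws Exception {
            CallableStatement cs = con.prepareCall("CALL SYSPROC.ADMIN_CMD(?)");
            try {
                cs.setString(1, exportCommand("/tmp/tt.del", "SELECT * FROM tt"));
                cs.execute();  // the server writes the file; read it afterwards
            } finally {
                cs.close();
            }
        }
    }
    ```

    Note that EXPORT writes the file on the database server, so the reading program needs access to that filesystem.
    
    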
