  1. #1
    Join Date
    Nov 2004
    Posts
    14

    Question Unanswered: ODBC == memory killer???

    Hi,

I'm using MySQL. My server is 4.1.7-nt and the ODBC driver is 3.51.10. Every time I open a query, the entire result set is copied into memory. Here are two examples that use ODBC:

    Java:

    Code:
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
Connection conn = DriverManager.getConnection("jdbc:odbc:mysql");
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from fid");
    Delphi:

    Code:
    table:= TTable.Create(nil);
    table.databaseName:= 'mysql';
    table.TableName:= 'fid';
    table.readOnly:= true;
    table.open;
After both statements, the entire result set (700 MB) is loaded into memory on the client side. The following ODBC flags are activated:
'Don't Optimize Column Width' and 'Return Matching Rows'

I've tried many flag configurations, but nothing helped. Is this a bug or a feature? If it's a feature, how can I disable it?

    Greetings,
    Markus

PS: I've tried the 'Allow Big Results' flag, but that doesn't help either. If I try 'Don't Cache Result', no application works any more....
    Last edited by Markus123; 11-29-04 at 06:20.

  2. #2
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
looks like it's doing what it should be doing: you have requested everything from table fid in the recordset. If fid comprises 700 MB of data, then that's what should happen.
you could try to reduce the amount of memory used by tuning the query to retrieve only the rows AND columns required

ie if you just want an order number from a large table, then select only the orderno column; if you can reduce the number of rows returned, even better, eg where orderdate between x and y

  3. #3
    Join Date
    Nov 2004
    Posts
    14
    Hi healdem,

but why doesn't ODBC fetch eg only 30 rows? Isn't it possible to set a maximum number of rows to fetch? After I've processed those 30 rows, the next 30 rows would be fetched...

    Greetings,
    Markus

  4. #4
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
if you have a primary key which is sequential and you want to limit the number of rows returned, you could do something similar to

Select RowID, Column1, Column2 from Table
Order by RowID
Limit <previousMaxRowIDReturned>, 30;

Or

Select RowID, Column1, Column2 from Table
Where RowID > Threshold
Order by RowID
Limit 30;

You may have to RTFM for the full details of the LIMIT clause
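If the existing calls can't all be rewritten, the keyset approach above can at least be wrapped in a tiny helper. A minimal sketch in Java - the class name KeysetPager is hypothetical, and the table fid and columns RowID, Column1, Column2 are just this thread's example names:

```java
public class KeysetPager {
    // Builds the query for the next page, using a sequential primary key
    // (RowID) as the paging cursor; pass the largest RowID seen so far.
    static String nextPageQuery(long lastSeenRowId, int pageSize) {
        return "select RowID, Column1, Column2 from fid"
             + " where RowID > " + lastSeenRowId
             + " order by RowID"
             + " limit " + pageSize;
    }
}
```

Each batch is bounded to pageSize rows, so client-side memory stays roughly constant regardless of the table size.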

  5. #5
    Join Date
    Nov 2004
    Posts
    14
    Hi healdem,

sorry, but using LIMIT is no solution for me, because the existing application already has too many SQL calls, so I can't recode everything.

If there is no way to limit the buffer size of the ODBC driver, I'll have to go back to the old DBMS ;(

    Greetings,
    Markus

  6. #6
    Join Date
    Nov 2004
    Posts
    14
    Hi everybody out there

I've done some investigation into my problem, into MySQL and into MyODBC. Here is my summary:

• MyODBC supports cursors towards the application.
• But MyODBC doesn't support cursors towards the server.
• That's why MyODBC stores the complete query result in local memory as soon as the query is sent - even if the application doesn't need the data yet.
• MyODBC can be configured not to cache the result, but then many applications (written in Java, Delphi, C#) no longer work.
• Application programmers must handle big result sets by using LIMIT.
• Processing huge data sets on the client side is limited by client RAM, not server RAM, even if the client processes row by row and drops the 'old' loaded data.


Can you confirm these theses, or is something wrong? Perhaps this would help other programmers, too....

    Thanks,
    Markus

  7. #7
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
    Hello Markus123,

First of all, thanks for pointing out this interesting problem.

Now, why don't you try the Connector/J JDBC driver instead of the ODBC one (since you seem to use Java)? It seems that it supports stmt.setFetchSize(int NbRows), which is what you are looking for: it makes the statement fetch NbRows at a time from the server, only when needed.

    Regards,

    RBARAER

  8. #8
    Join Date
    Nov 2004
    Posts
    14
    Hi RBARAER,

    I've used ODBC in my Java example, because I wanted to test the ODBC.

    Quote Originally Posted by RBARAER
    It seems that this one supports stmt.setFetchSize(int NbRows), which is what you are looking for : it makes the statement fetch NbRows at a time on the server, only when needed.
That's true, the Java Statement interface supports a fetch size, but Connector/J doesn't honour it. I've used JDBC and set fetch sizes from 100 down to 1 - it still loads the complete data into memory. This bears out my thesis: MySQL doesn't support cursors towards its connectors; they load the complete result data into local memory.

    Thanks,
    Markus

  9. #9
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
    Hello Markus123,

I couldn't quite believe that the MySQL connectors were so bad at querying data, but it seems so...

Anyway, you still have the possibility of fetching one row at a time. You said you tried a fetch size of 1; in the MySQL doc I pointed you to last time, they say one must use the constant Integer.MIN_VALUE. Here is the text on ResultSets for Connector/J:

    By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate, and due to the design of the MySQL network protocol is easier to implement. If you are working with ResultSets that have a large number of rows or large values, and can not allocate heap space in your JVM for the memory required, you can tell the driver to 'stream' the results back one row at-a-time.

    To enable this functionality, you need to create a Statement instance in the following manner:

stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(Integer.MIN_VALUE);

    The combination of a forward-only, read-only result set, with a fetch size of Integer.MIN_VALUE serves as a signal to the driver to "stream" result sets row-by-row. After this any result sets created with the statement will be retrieved row-by-row.

    There are some caveats with this approach. You will have to read all of the rows in the result set (or close it) before you can issue any other queries on the connection, or an exception will be thrown. Also, any tables referenced by the query that created the streaming result will be locked until all of the results have been read or the connection closed.
Let me know if that works. It's not an ideal solution anyway: fetching rows one by one is typically slow if you have many results.
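Pieced together, the quoted recipe looks something like this - a sketch only; the class name StreamFid is hypothetical, and actually running the query would require a live MySQL server and the thread's table fid:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamFid {
    // Per the Connector/J doc quoted above, Integer.MIN_VALUE is the
    // signal to stream result sets row by row instead of buffering them.
    static final int STREAMING_FETCH_SIZE = Integer.MIN_VALUE;

    // Forward-only + read-only + fetch size Integer.MIN_VALUE together
    // enable streaming on any result set created from this statement.
    static Statement streamingStatement(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(STREAMING_FETCH_SIZE);
        return stmt;
    }
}
```

Mind the caveats from the quote: read the ResultSet to the end (or close it) before issuing any other query on the same connection, and expect the referenced tables to stay locked until then.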

    BTW,
    ...The combination of a forward-only, read-only result set, with a fetch size of Integer.MIN_VALUE serves as a signal...
    Why did they not think of something more complicated ? I hate such simple manipulations.

    Hope that helps...

    Regards,

    RBARAER

  10. #10
    Join Date
    Nov 2004
    Posts
    14
    Hi RBARAER,

you are right. Connector/J supports the fetch size: setting Integer.MIN_VALUE as the fetch size enables row-by-row streaming.

ODBC still doesn't work, though. There seems to be no flag to set the fetch size....

    Greetings,
    Markus

PS: In my Delphi example the database access is configured via the BDE. Setting the "ROWSET SIZE" to 20 doesn't help. Why does Connector/J have more performance options than ODBC?
    Last edited by Markus123; 12-01-04 at 08:31.
