  1. #1
    Join Date
    Nov 2002
    Posts
    21

    Quantifying DB execution time

    Hi guys...

    I am implementing a web application that needs to query a database every time a user logs in, in order to retrieve the required page content, i.e. the page content is stored as a string in a database.

    From what I've heard, connecting to the database and executing the required query is time-consuming, especially in a web environment. Can this time be quantified? I tried to print the time before the query is executed and the time after the results are returned, but the two times came out the same.

    Can I actually calculate the time JDBC takes to execute a query? I need to gather some statistics on performance, i.e. executing a DB query vs. fetching the results from a cache.
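
    For reference, this is roughly what I tried (a sketch; the driver URL, table, and column names are just placeholders for my real ones):

    Code:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class QueryTimer {
            public static void main(String[] args) throws Exception {
                // Placeholder URL and credentials -- substitute your own driver/database.
                Connection conn = DriverManager.getConnection(
                        "jdbc:somedriver://localhost/mydb", "user", "password");

                long start = System.currentTimeMillis();

                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(
                        "SELECT content FROM pages WHERE page_id = 1");
                while (rs.next()) {
                    rs.getString("content"); // walk the rows so they are really fetched
                }
                rs.close();
                stmt.close();

                long elapsed = System.currentTimeMillis() - start;
                System.out.println("query took " + elapsed + " ms");

                conn.close();
            }
        }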


    regards


    sandro

  2. #2
    Join Date
    Oct 2002
    Location
    Baghdad, Iraq
    Posts
    697

    Re: Quantifying DB execution time

    Originally posted by Sandro Psaila
    From what I've heard, connecting to the database and executing the required query is time-consuming, especially in a web environment. Can this time be quantified? I tried to print the time before the query is executed and the time after the results are returned, but the two times came out the same.

    Can I actually calculate the time JDBC takes to execute a query? I need to gather some statistics on performance, i.e. executing a DB query vs. fetching the results from a cache.
    Java uses TCP for most inter-process communication. The popular TCP implementations, in turn, have a "slow start" feature that makes each new connection expensive: the first few packets are sent through a small window that grows until TCP figures out the optimal window size. (That's an issue even for loopback connections. Note this may not apply to JDBC drivers that talk to the database through native libraries or shared memory rather than TCP.)

    In addition, any protocol has to do some amount of handshaking before it can get down to business: at a bare minimum the server needs to say, "I'm running version such-and-such," and the client has to say, "okay, that works for me." If you've got a middleman like the JDBC-ODBC bridge, there's even more handshaking. So that's the short story on why each new connection is expensive.

    The way around this is to keep a pool of connections: you keep used connections open and recycle them, so the TCP stream stays in its optimal state and the DBMS doesn't have to shake hands with the client each time. The trade-off is that the client holds on to those resources, but for a busy web site that's rarely an issue.
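
    To make the recycling idea concrete, here's a bare-bones sketch (the class and method names are mine, and a real pool would do much more):

    Code:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.LinkedList;

        // Toy connection pool: recycle open connections instead of closing them,
        // so the TCP setup and the DBMS handshake are only paid for once.
        public class SimpleConnectionPool {
            private final LinkedList<Connection> idle = new LinkedList<Connection>();
            private final String url, user, password;

            public SimpleConnectionPool(String url, String user, String password) {
                this.url = url;
                this.user = user;
                this.password = password;
            }

            public synchronized Connection getConnection() throws SQLException {
                if (!idle.isEmpty()) {
                    return idle.removeFirst(); // reuse: no new connection cost
                }
                return DriverManager.getConnection(url, user, password); // full cost
            }

            public synchronized void release(Connection conn) {
                idle.addLast(conn); // keep it open for the next caller
            }
        }

    Real pools (the javax.sql.DataSource implementations that application servers provide) also validate stale connections and cap the pool size, but the recycling idea is the same.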

    If your driver or application server pools connections, that would explain why you see no measurable penalty for connecting to the database. You'll have to dig through the documentation for your JDBC driver to figure out how to turn it off if you want accurate numbers; the best I can say is that it will be called "connection pooling".
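
    Even if you can't switch pooling off, you can still separate the two costs by timing the connect and the query independently (again a sketch; substitute your own URL and query):

    Code:
        import java.sql.*;

        public class ConnectVsQuery {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:somedriver://localhost/mydb"; // placeholder

                // Cost of opening a connection: TCP setup plus protocol handshake
                // (or next to nothing, if a pool hands back a recycled connection).
                long t0 = System.currentTimeMillis();
                Connection conn = DriverManager.getConnection(url, "user", "password");
                long connectMs = System.currentTimeMillis() - t0;

                // Cost of the query alone, on the already-open connection.
                long t1 = System.currentTimeMillis();
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(
                        "SELECT content FROM pages WHERE page_id = 1");
                while (rs.next()) {
                    rs.getString(1);
                }
                long queryMs = System.currentTimeMillis() - t1;

                System.out.println("connect: " + connectMs
                        + " ms, query: " + queryMs + " ms");

                rs.close();
                stmt.close();
                conn.close();
            }
        }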
