  1. #1
    Join Date
    Sep 2002
    Location
    Toronto, Canada
    Posts
    76

    Unanswered: using a temp table

    I have a query of the form:

    select * from (
        select contact.id
             , contact.name
             , (select ... from detail inner join myDetail using (detail_id)) as cost
        from contact
        inner join user using (user_id)
        where user_id = 1
    ) as OUTTABLE
    where cost > 30;


    The problem is that all the tables in the query are big, so it takes a long time to get results from OUTTABLE (around 20 seconds). When I add the where clause at the end (cost > 30), the query takes far too long (around 200 seconds sometimes).

    To make the query run faster, I thought of creating a temp table to store OUTTABLE, and then just filtering the temp table on cost > 30. That improves the speed tremendously (total time was around 23 seconds). The problem with that approach is that I'm writing an application, and more than one user might need to run the query at the same time. (Different users should get different results, because they have different user ids and the other values in the query would differ.) Also, I can't just create and drop temp tables in an enterprise application (we are not using EJBs in our application either); it seems like bad style.
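
    Concretely, the temp-table version I tried looks roughly like this (just a sketch: the cost subselect and the column names inside it are stand-ins, not my real schema):

    -- session-scoped temp table holding OUTTABLE
    create temp table outtable as
        select contact.id
             , contact.name
             , (select sum(detail.cost)              -- stand-in for the real cost subselect
                  from detail
                  inner join myDetail on myDetail.detail_id = detail.id) as cost
        from contact
        where contact.user_id = 1;

    select * from outtable where cost > 30;

    drop table outtable;   -- or just let it go away at the end of the session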

    Anyhow, how could I make this query faster, so that the rows of OUTTABLE are retrieved first and the where clause (cost > 30) is applied afterwards as a filter?

    Thanks in advance.

  2. #2
    Join Date
    May 2002
    Location
    Florida, USA
    Posts
    49
    Without knowing the structure of your tables and the rationale for your query, it is impossible to give any good advice.

    However, there are often better ways to handle complex queries than nested subselects. Hugh Darwen's seminar mentions some ways around this. (The seminar is a diatribe about the failings of SQL, but along the way he shows some interesting possibilities with SQL.)
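
    For example, a scalar subselect evaluated per row can often be rewritten as a join against a grouped derived table, along these lines (purely a sketch -- I'm guessing at your column names and at what the cost expression actually aggregates):

    select contact.id, contact.name, d.cost
    from contact
    inner join (
        -- hypothetical: assumes the detail rows are keyed back to a contact somehow
        select myDetail.contact_id, sum(detail.cost) as cost
        from detail
        inner join myDetail on myDetail.detail_id = detail.id
        group by myDetail.contact_id
    ) as d on d.contact_id = contact.id
    where contact.user_id = 1
      and d.cost > 30;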

    Also, PostgreSQL has many ways you can influence the query optimizer to better use indexes. (You do have indexes on your 'cost' and 'id' columns, don't you?).
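
    For example (the index and column names here are only guesses at your schema):

    -- indexes on the columns the query joins and filters on
    create index contact_user_id_idx on contact (user_id);
    create index mydetail_detail_id_idx on myDetail (detail_id);

    -- refresh the planner's statistics so it can pick the indexes
    analyze;

    -- see which plan is actually chosen and where the time goes
    explain analyze select * from contact where user_id = 1;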

    Another good option is to write a stored procedure as a set-returning function in PL/pgSQL that handles this logic. Functions can also be declared as cacheable, so that they are optimized much more aggressively than regular queries.
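
    A rough sketch of what such a function might look like (the row type and the query inside are placeholders for your real schema):

    -- composite type describing the rows the function returns
    create type contact_cost as (id integer, name text, cost numeric);

    create or replace function contacts_over_cost(integer, numeric)
    returns setof contact_cost as '
    declare
        p_user_id  alias for $1;
        p_min_cost alias for $2;
        r contact_cost;
    begin
        for r in
            select contact.id
                 , contact.name
                 , 0                       -- stand-in for the real cost subselect
            from contact
            where contact.user_id = p_user_id
        loop
            if r.cost > p_min_cost then
                return next r;
            end if;
        end loop;
        return;
    end;
    ' language plpgsql stable;

    -- called like a table:
    select * from contacts_over_cost(1, 30);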

  3. #3
    Join Date
    Apr 2003
    Location
    MN
    Posts
    9
    I work with Oracle databases most of the time at work. 200 seconds for a query doesn't seem very long in my eyes. How much data are you going after? We have some tuned SQL that runs for an hour, but only because it's going after 50 tables with millions of records in each table. Those queries are for reports, though; maybe you are using the SQL within an application? I suppose that, depending on what you need the results for, 200 seconds could be long if the app hangs while the user waits for it.

  4. #4
    Join Date
    Apr 2003
    Location
    MN
    Posts
    9
    Or is PG just that fast, so that a 200 second SQL statement is not normal?

  5. #5
    Join Date
    Apr 2003
    Location
    Tunisia
    Posts
    192

    Cool

    Yeah man, you should just use a view. It gets stored in your database and is only updated when necessary, and that would be fastest: a select (all) on the view will give you a static table of the selected data! Good luck, man!
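
    Something like this (table and column names follow the earlier posts):

    create view contact_costs as
        select contact.user_id
             , contact.id
             , contact.name
             , (select sum(detail.cost)              -- stand-in for the real cost subselect
                  from detail
                  inner join myDetail on myDetail.detail_id = detail.id) as cost
        from contact;

    -- each user filters the same view with their own id
    select * from contact_costs where user_id = 1 and cost > 30;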
