  1. #1
    Join Date
    Nov 2003

    Unanswered: Performance improvement in a query with one-to-many relationships


    Consider this:

    I have a table A which contains project name and project id
    I have a table B which contains project id and team members

    There’s a stored procedure that has to return all the projects with their team members. It goes like this –

    Select A.project_name, function_member(A.project_id) from A;

    function_member is a function which opens a cursor, selects all the team members for the given project from table B, and concatenates them with ',' as the delimiter.
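
    In essence, function_member does something like this (a simplified sketch; the real column and parameter names are different):

    CREATE OR REPLACE FUNCTION function_member (p_project_id IN NUMBER)
      RETURN VARCHAR2
    IS
      v_list VARCHAR2(4000);
    BEGIN
      -- one cursor per call: this runs once for every row returned from table A
      FOR r IN (SELECT member_name FROM b WHERE project_id = p_project_id) LOOP
        IF v_list IS NULL THEN
          v_list := r.member_name;
        ELSE
          v_list := v_list || ',' || r.member_name;
        END IF;
      END LOOP;
      RETURN v_list;
    END;
    /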

    The output of the procedure is something like:

    ProjectName1 owner1,owner2
    ProjectName2 owner2,owner3,owner4
    This is displayed in a web page.

    Now the problem is that in the real case, there are lots of tables like table B which have a one-to-many relationship with table A. Also the query is much more complex than the example given above.
    This makes the query really slow.

    Please let me know if you have any ideas to get rid of functions like function_member from the query and make the query faster.

    I have considered denormalization of the data model, which definitely improves performance by leaps and bounds, but that is pretty much the last option we want to look at.


  2. #2
    Join Date
    Jul 2003
    Have you looked at what indexes (if any) those functions/queries are using? That would be my first step.
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  3. #3
    Join Date
    Aug 2003
    Where the Surf Meets the Turf @Del Mar, CA
    Provided Answers: 1
    Whenever you have a SQL or PL/SQL "performance" issue, always enable SQL_TRACE and then use TKPROF to see where the time is being spent and to create an EXPLAIN PLAN for each SQL statement being executed.
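    For example (the trace file name and login below are placeholders):

    ALTER SESSION SET SQL_TRACE = TRUE;   -- in the session running the slow query
    -- ... execute the procedure ...
    ALTER SESSION SET SQL_TRACE = FALSE;

    Then, on the server, format the trace file found in user_dump_dest:

    tkprof ora_1234.trc report.txt explain=scott/tiger sys=no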
    I suspect that you are doing something close to a full table scan (FTS) of the member table for each row in the project table.
    The problem is that the current implementation does not scale well.
    By using PL/SQL and an explicit cursor loop, you should be able to make a single FTS through the member table to get the desired results.
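    Roughly along these lines (a sketch only; the member column name is an assumption, the output just goes to DBMS_OUTPUT, and projects with no members are omitted here):

    DECLARE
      -- one ordered pass over the join instead of one cursor per project row
      CURSOR c_members IS
        SELECT a.project_id, a.project_name, b.member_name
          FROM a, b
         WHERE b.project_id = a.project_id
         ORDER BY a.project_id;
      v_last_id a.project_id%TYPE;
      v_line    VARCHAR2(4000);
    BEGIN
      FOR r IN c_members LOOP
        IF v_last_id IS NULL OR r.project_id <> v_last_id THEN
          IF v_last_id IS NOT NULL THEN
            DBMS_OUTPUT.PUT_LINE(v_line);            -- finished project row
          END IF;
          v_line    := r.project_name || ' ' || r.member_name;
          v_last_id := r.project_id;
        ELSE
          v_line := v_line || ',' || r.member_name;  -- next team member
        END IF;
      END LOOP;
      IF v_last_id IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE(v_line);                -- last project
      END IF;
    END;
    /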

  4. #4
    Join Date
    Nov 2002
    Desk, slightly south of keyboard
    I would suggest that the only scalable solution is to replace the function call - maybe with a materialised view (performance hit per update or commit), maybe with an extra column on the parent table and a trigger to maintain its contents.

    Basically, the work the function has to do per row can't be reduced; it's really a question of whether an extra millisecond per row update is acceptable or not. If you are doing big bulk inserts then maybe the trigger/MV isn't the answer; if it's purely OLTP then this would probably solve the problem (see the sketch below).
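
    As a rough illustration of the extra-column-plus-trigger idea (column and table names are assumptions from the original post, and a real version would also need to handle updates and deletes on the child table):

    ALTER TABLE a ADD (team_members VARCHAR2(4000));

    CREATE OR REPLACE TRIGGER trg_b_members
      AFTER INSERT ON b
      FOR EACH ROW
    BEGIN
      -- maintain a pre-built, comma-separated member list on the parent row
      UPDATE a
         SET team_members = CASE
                              WHEN team_members IS NULL THEN :NEW.member_name
                              ELSE team_members || ',' || :NEW.member_name
                            END
       WHERE project_id = :NEW.project_id;
    END;
    /

    The report query then becomes a plain single-table select:

    SELECT project_name, team_members FROM a;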

    Like Einstein said (or would have if he'd ever seen Oracle) "WTF is that!".


  5. #5
    Join Date
    Nov 2003
    Hi,
    Thank you all for the replies.
    Actually, I have tried putting indexes where appropriate and this has improved performance considerably. However, it is still slow compared to a denormalised table. I would like to believe that there is some way to make the performance of the current normalised model almost as good as in the denormalised case.
    I will try the materialised view option.

    Thanks again,

  6. #6
    Join Date
    Sep 2002
    Provided Answers: 1
    If you are on 9i this may be of interest:

    There is a facility to build your own aggregate functions, including (as described here) a function to concatenate together values in a column.
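
    The usual pattern is a user-defined aggregate built on the ODCIAggregate interface. A minimal sketch (the type and function names here are placeholders, and member_name is assumed from the original post):

    CREATE OR REPLACE TYPE concat_agg_type AS OBJECT (
      str VARCHAR2(4000),
      STATIC FUNCTION ODCIAggregateInitialize (sctx IN OUT concat_agg_type)
        RETURN NUMBER,
      MEMBER FUNCTION ODCIAggregateIterate (self IN OUT concat_agg_type,
                                            value IN VARCHAR2) RETURN NUMBER,
      MEMBER FUNCTION ODCIAggregateTerminate (self IN concat_agg_type,
                                              returnvalue OUT VARCHAR2,
                                              flags IN NUMBER) RETURN NUMBER,
      MEMBER FUNCTION ODCIAggregateMerge (self IN OUT concat_agg_type,
                                          ctx2 IN concat_agg_type) RETURN NUMBER
    );
    /
    CREATE OR REPLACE TYPE BODY concat_agg_type IS
      STATIC FUNCTION ODCIAggregateInitialize (sctx IN OUT concat_agg_type)
        RETURN NUMBER IS
      BEGIN
        sctx := concat_agg_type(NULL);
        RETURN ODCIConst.Success;
      END;

      MEMBER FUNCTION ODCIAggregateIterate (self IN OUT concat_agg_type,
                                            value IN VARCHAR2) RETURN NUMBER IS
      BEGIN
        -- append each incoming value with a ',' delimiter
        IF self.str IS NULL THEN
          self.str := value;
        ELSE
          self.str := self.str || ',' || value;
        END IF;
        RETURN ODCIConst.Success;
      END;

      MEMBER FUNCTION ODCIAggregateTerminate (self IN concat_agg_type,
                                              returnvalue OUT VARCHAR2,
                                              flags IN NUMBER) RETURN NUMBER IS
      BEGIN
        returnvalue := self.str;
        RETURN ODCIConst.Success;
      END;

      MEMBER FUNCTION ODCIAggregateMerge (self IN OUT concat_agg_type,
                                          ctx2 IN concat_agg_type) RETURN NUMBER IS
      BEGIN
        -- combine partial results (e.g. from parallel query slaves)
        IF ctx2.str IS NOT NULL THEN
          IF self.str IS NULL THEN
            self.str := ctx2.str;
          ELSE
            self.str := self.str || ',' || ctx2.str;
          END IF;
        END IF;
        RETURN ODCIConst.Success;
      END;
    END;
    /
    CREATE OR REPLACE FUNCTION concat_agg (input VARCHAR2) RETURN VARCHAR2
      PARALLEL_ENABLE AGGREGATE USING concat_agg_type;
    /

    With that in place, the whole report becomes one ordinary aggregate query with no per-row function calls:

    SELECT a.project_name, concat_agg(b.member_name)
      FROM a, b
     WHERE b.project_id = a.project_id
     GROUP BY a.project_name;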
