Results 1 to 15 of 22
  1. #1
    Join Date
    Jan 2004
    Posts
    8

    Unanswered: Performance issue on a table containing a large number of records, without indexes

    I have an application that uses an Oracle user (schema) owning 56 tables. One table holds 1.5 million records, and whenever I fetch from it or run calculations on its columns to generate a report, it takes so long that my application throws an "operation timed out" error. We established that this is caused by the large number of records: a test confirmed that report generation is quick when there are few records, and that as the record count grows, report-generation performance degrades.

    We tried indexing the columns, and this greatly improved performance. On the other hand, the indexes hurt performance in the other flows of the application that update these records (and end users spend a much higher percentage of their time in those modules than in the report-generation module). So we ruled out indexing. What can be done in this situation?

    We need to improve performance without having any indexes on the table.

    Could we keep an exact replica of the table's data in another table, create indexes on that new table, and have the application use it only for report generation, while the real table is used for all changes and updates? In that case, any updates made to the real table would have to be reflected in the replica simultaneously. Can this be achieved? Any solutions, or alternatives to this approach, are greatly appreciated.
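    A minimal sketch of the simultaneous-replica idea, using a row-level trigger. The table and column names (orders, order_id, order_date, amount) are made up for illustration; note that the trigger itself adds overhead to every DML statement, much like an index would:

    Code:
    ```sql
    -- Hypothetical reporting copy of the real table
    CREATE TABLE orders_rpt AS SELECT * FROM orders;

    -- Keep the copy in sync on every insert/update/delete
    CREATE OR REPLACE TRIGGER orders_rpt_sync
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        INSERT INTO orders_rpt (order_id, order_date, amount)
        VALUES (:NEW.order_id, :NEW.order_date, :NEW.amount);
      ELSIF UPDATING THEN
        UPDATE orders_rpt
           SET order_date = :NEW.order_date,
               amount     = :NEW.amount
         WHERE order_id = :NEW.order_id;
      ELSE
        DELETE FROM orders_rpt WHERE order_id = :OLD.order_id;
      END IF;
    END;
    /
    ```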

    Regards
    prathinesh

  2. #2
    Join Date
    Jan 2004
    Location
    Hyderabad, India
    Posts
    37
    Hi ,

    If you are going to query the table with the same query (in other words, for one report), then you can probably create a materialized view on it with the REFRESH ON COMMIT option, which automatically refreshes the view whenever there is DML on the base table...

    If you are going to generate a lot of reports, then this solution will not work well, as you would need to create a materialized view for each report...
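    A rough sketch of this, assuming a hypothetical orders table. Note that REFRESH FAST ON COMMIT also requires a materialized view log on the base table, and fast refresh of an aggregate needs the COUNT columns included:

    Code:
    ```sql
    -- Materialized view log needed for fast refresh on commit
    CREATE MATERIALIZED VIEW LOG ON orders
      WITH ROWID (order_date, amount)
      INCLUDING NEW VALUES;

    -- Pre-aggregated view for the report query
    CREATE MATERIALIZED VIEW orders_report_mv
      BUILD IMMEDIATE
      REFRESH FAST ON COMMIT
    AS
      SELECT order_date,
             SUM(amount)   AS total_amount,
             COUNT(amount) AS cnt_amount,
             COUNT(*)      AS cnt
        FROM orders
       GROUP BY order_date;
    ```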
    Regards
    Suneel

  3. #3
    Join Date
    Jan 2004
    Location
    India
    Posts
    62
    If you are using Oracle 8 or above, isn't it possible for you to create that particular table with partitions?
    Depending on your requirements, the partitions can serve the report queries as well as the updates.
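    For example, range partitioning by a date column (the table and column names here are made up for illustration):

    Code:
    ```sql
    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p2003 VALUES LESS THAN (TO_DATE('01-JAN-2004','DD-MON-YYYY')),
      PARTITION p2004 VALUES LESS THAN (TO_DATE('01-JAN-2005','DD-MON-YYYY')),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    ```

    You still query the one table; queries that filter on order_date only touch the relevant partitions.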
    Regards,

    Rushi

  4. #4
    Join Date
    Jan 2004
    Posts
    8
    Hi Rushi,

    Will partitioning the table solve the problem? I am not very sure about partitioning and how it works, but to my knowledge, if we partition the table into multiple pieces, my question is which piece I need to query, since my report query requires all the data in the table.

    If you have any additional info on this, please share it.

  5. #5
    Join Date
    Jan 2004
    Posts
    8
    Originally posted by suneel.kumar
    Hi ,

    If you are going to query the table with the same query (in other words, for one report), then you can probably create a materialized view on it with the REFRESH ON COMMIT option, which automatically refreshes the view whenever there is DML on the base table...

    If you are going to generate a lot of reports, then this solution will not work well, as you would need to create a materialized view for each report...
    Hi Suneel

    Thanks for your suggestion. Looking at your solution, I probably need to know what a materialized view is and how it works. Can you help me with this? Considering this solution: I do have the same query fired for the same report, but my application generates several other reports based on the same table, so if I understand your suggestion correctly, it might not fit. Please clarify.

    Prathinesh

  6. #6
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1

    Re: Performance issue on a table containing a large number of records, without indexes

    Of course, adding an index to any table has a negative impact on the performance of inserts, updates and deletes on that table - but not typically a very significant impact. What sort of difference are you seeing that makes using an index unacceptable?

    Any other solution that meets your requirement that "any updates made to the real table should be reflected in the replica simultaneously" will also slow down those updates, of course! Only by separating the act of replication from the change to the table could this be avoided. But that would mean that for some time (seconds, minutes or hours, depending on how you do it) the two tables would be out of sync.

    I suspect the correct answer here will be to add the index(es) and accept that DML will be impacted a little.

  7. #7
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    I would agree with Tony's reply: indexing shouldn't add much overhead, with a couple of provisos.

    1) Keep the number of indexes to a minimum. Don't forget that an index on columns (a,b,c) can speed up queries on a, on a & b, and on a & b & c. Consider the COMPRESS option, as this might reduce I/O. You don't need to index every criterion used in your queries, just those which are most commonly used (as in the 80:20 rule).

    2) Ensure your indexes aren't in the same tablespace as the data, and are on different disks/stripes, so you don't get the disk head going back and forth between indexes and tables.
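    Putting both provisos together, something like this (the index, column, and tablespace names are illustrative):

    Code:
    ```sql
    -- One composite index covering the common report criteria,
    -- compressed on the repetitive leading column,
    -- kept in a separate tablespace from the table data.
    CREATE INDEX orders_rpt_ix
      ON orders (region, order_date, status)
      COMPRESS 1
      TABLESPACE indx;
    ```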

    Alan

  8. #8
    Join Date
    Jan 2004
    Posts
    8
    Hi

    Adding indexes is the best solution; even I agree with that. But consider the proportion of time the users spend generating reports versus modifying or updating records. If it were something like 90%:10% (report generation : edits), the indexing overhead would be acceptable, but here it is the reverse, i.e. 10%:90%. So it is not acceptable, and I have experienced this.

    Any other alternatives?

    Prathinesh

  9. #9
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Those percentages alone don't justify not having indexes. Consider:

    90% edits, each edit taking 1 second without index, 2 seconds with index
    10% reports, each report taking 10 minutes (600 seconds) without index, 10 seconds with index

    Without index the average "cost" in time is 90%*1 + 10%*600 = 60.9 seconds

    With index it is 90%*2 + 10% * 10 = 2.8 seconds

    i.e. the gain is much more than the pain.

    Of course these figures are just examples, but the principle applies.

    What is the difference in performance of edits with/without indexes in your case?

  10. #10
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Another possible flaw in your reasoning: from "90% of the users' time is spent performing updates" you cannot deduce that doubling the time taken by insert statements doubles the time users spend on that task. You need to consider all the other things a user has to do. Even if a user does nothing all day but insert records into one table, the process probably consists of steps like these (imaginary timings added):
    1) Press button to start insert (1 second)
    2) Type in data, possibly using pick lists etc. to populate fields (5 seconds)
    3) Press "Save" button (1 second)
    4) DB performs insert (0.1 seconds)
    5) User receives "record saved" confirmation (0.5 seconds)

    That's around 7.6 seconds per insert using my hypothetical figures. If you double the time the DBMS takes to do the insert (step 4), that becomes 7.7 seconds, i.e. just over 1% longer.

    Let's say the user typically enters 1000 records per day.
    Without the index: 7600 seconds (just under 2 hours 7 minutes)
    With the index: 7700 seconds (just over 2 hours 8 minutes)

    The user "loses" 100 seconds per day with the index.

    If they run a report just once and it runs in 1 minute instead of 10 minutes, they are still better off with the index!

  11. #11
    Join Date
    Jan 2004
    Posts
    8
    The principle seems good, but a clarification is needed regarding the number of users. Let's say 100 users are using the application and trying to edit or update simultaneously, and that each update takes 2 seconds. Now imagine the wait for the 100th user: he gets his turn to update only after 100 * 2 = 200 seconds. That is absolutely not acceptable in this situation.

    Prathinesh

  12. #12
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    You mean that although this table has 1.5 million records, all 100 users will want to edit the same record at the same time? Really?

  13. #13
    Join Date
    Jan 2004
    Posts
    8
    It's not exactly editing a record; it's inserting new records into the table. In these cases, each user may try to insert 20-40 records.

    Prathinesh

  14. #14
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Originally posted by prathinesh
    It's not exactly editing a record; it's inserting new records into the table. In these cases, each user may try to insert 20-40 records.

    Prathinesh
    A DBMS like Oracle is multi-user. Users don't have to wait to take turns at inserting records, they can all insert records at the same time. Of course this puts a load on the system, but it isn't as simple as saying that if it takes 2 seconds for 1 user to insert 1 record then it will take 200 seconds for 100 users to each insert 1 record. Maybe it will take 2.5 seconds, maybe longer - but not that long!

  15. #15
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Originally posted by andrewst
    A DBMS like Oracle is multi-user. Users don't have to wait to take turns at inserting records, they can all insert records at the same time. Of course this puts a load on the system, but it isn't as simple as saying that if it takes 2 seconds for 1 user to insert 1 record then it will take 200 seconds for 100 users to each insert 1 record. Maybe it will take 2.5 seconds, maybe longer - but not that long!
    It's hard to demonstrate 100 users inserting data at once, but here's a single-user experiment that shows that elapsed time taken to insert 100 records is nowhere near 100 times the elapsed time taken to insert 1:

    Code:
    SQL> create table t1 (id int primary key, name varchar2(100) );
    
    Table created.
    
    SQL> set timing on
    
    SQL> insert into t1 values (1,'Name1');
    
    1 row created.
    
     real: 60
    
    SQL> insert into t1 values (2,'Name2');
    
    1 row created.
    
     real: 90
    Those 2 inserts took (by the "wall clock") 60ms and 90ms respectively (0.06 and 0.09 seconds). The variation comes from various "noise" factors like how busy the database is, how busy the network is etc. It includes not just the raw time taken to insert the data, but also the time taken to send the request from the client (SQL Plus) to the server, and to receive the acknowledgement.

    Now let's insert 100 records using some PL/SQL:

    Code:
    SQL> begin
      2    for i in 1..100 loop
      3      insert into t1 values (100+i, 'Namexxx');
      4    end loop;
      5  end;
      6  /
    
    PL/SQL procedure successfully completed.
    
     real: 201
    201ms for 100 records, or about 2 ms per record.

    So inserting 100 records took less than 4 times as long as inserting one record!
