  1. #1
    Join Date
    Oct 2007
    Posts
    5

    Unanswered: Splitting a large table into multiple ones

    Hi,
    Since we don't have a license for the Enterprise Edition, which supports partitioning, we are planning to do it in our framework. We have some really big tables (200 million records) covering multiple customers. We are planning to break each one into multiple tables, one per customer. Will this help in improving read/write operations on these tables?

    Thanks for your help

  2. #2
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    >Will this help in improving read/write operations on this table.
    What do YOUR benchmark test results show you?

    IMO, it will NOT change write performance.
    With regard to read performance, it could make it better, worse or unchanged.
    It depends upon how the data is organized & what the actual SQL queries are.
    Don't you think that if there was a 100% always-correct answer, you'd find such a recommendation in the Oracle Tuning Guide?

    If the results violate Third Normal Form, you'll likely end up causing more problems than you solve.

    There is NO Silver Bullet when it comes to database tuning.
    Last edited by anacedent; 10-22-07 at 23:11.
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  3. #3
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
    AFAIK, a proper partitioning can really improve read performance because Oracle can choose which partitions to read and, in addition, scan partitions in PARALLEL, which is really useful if you have multiple CPUs.

    However, without the Oracle partitioning option, the kind of pseudo-partitioning you are planning won't help at all IMO. It could help with mass deletes, for example (TRUNCATE or DROP TABLE of pseudo-partitions instead of DELETEs), but you won't get any noticeable improvement on SELECTs, UPDATEs or INSERTs just by doing so. As anacedent says, de-normalizing your schema will likely put you in a lot of trouble, and IMO it is really not worth the price.
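
    For illustration, with hypothetical per-tenant table names, here is the mass-delete case where pseudo-partitions do pay off - cleanup becomes a near-instant DDL operation instead of a row-by-row delete:

    ```sql
    -- Without pseudo-partitions: DELETE generates undo/redo for every row
    DELETE FROM customer_data WHERE tenant_id = 5;

    -- With a per-tenant pseudo-partition (hypothetical name customer_data_5),
    -- the same cleanup is near-instant DDL:
    TRUNCATE TABLE customer_data_5;
    -- or, to remove the pseudo-partition entirely:
    DROP TABLE customer_data_5;
    ```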

    If you want performance, tune your SQL statements: use bind variables (or know exactly WHY you don't use them in a specific query), try different ways to write a query and different indexes. Be sure to test on a realistic data sample and keep statistics up-to-date: when you create a new index, for example, gather statistics on it before testing your query, or the optimizer won't even notice the index!
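
    To sketch the bind variable point (table and column names are made up for illustration; SQL*Plus syntax assumed):

    ```sql
    -- With literals, every distinct value forces a new hard parse
    -- and a new cursor in the shared pool:
    SELECT * FROM customer_data WHERE tenant_id = 5;
    SELECT * FROM customer_data WHERE tenant_id = 6;

    -- With a bind variable, one shared cursor is parsed once and reused
    -- for every tenant (SQL*Plus syntax shown):
    VARIABLE tid NUMBER
    EXEC :tid := 5
    SELECT * FROM customer_data WHERE tenant_id = :tid;
    ```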

    If you have problems tuning a specific query, don't hesitate to ask for help. Provided you give enough information (the SQL query, tables, indexes and the execution plan), there are people here who will be able to help you!

    Regards,

    rbaraer
    ORA-000TK : No bind variable detected... Shared Pool Alert code 5 - Nuclear query ready .

  4. #4
    Join Date
    Oct 2007
    Posts
    5
    We are not denormalizing the tables. I have a table with tenant_id, first_name, last_name, and about 50 other columns. This table has millions of rows, and our queries are typically for a single tenant (based on the logged-in user) - a classic multi-tenant solution. However, this table has become huge, and we search on email and last name (note: we have indexes on (tenant_id, UPPER(email)) and (tenant_id, UPPER(lastname))). The idea is to break the table up so that each customer/tenant has its own table - e.g. customer_data_5, with the exact same columns but only the rows for the customer with id 5 - and point our queries at that table. My guess was that since the tables are smaller now, my reads/inserts should be faster. However, my reads are not showing much improvement, which was a surprise to me. So any guidance here will help.

  5. #5
    Join Date
    Jun 2004
    Location
    Liverpool, NY USA
    Posts
    2,509
    Give an example of the query you are using. For example if you used

    lastname = 'SMITH'

    The index would not be used: with a function-based index (FBI), the WHERE clause must match the indexed expression exactly.

    UPPER(lastname) = 'SMITH'

    Would use the index. (assuming that statistics have been gathered)
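
    To sketch the full sequence (index and table names here are made up; the poster's actual index is on (tenant_id, UPPER(lastname))):

    ```sql
    -- Function-based index on the same expression the queries use:
    CREATE INDEX cust_upper_lastname_ix
        ON customer_data (tenant_id, UPPER(lastname));

    -- Gather statistics so the optimizer will consider the new index:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'CUSTOMER_DATA', cascade => TRUE);

    -- Matches the indexed expression, so the FBI can be used:
    SELECT * FROM customer_data
     WHERE tenant_id = :tid AND UPPER(lastname) = 'SMITH';

    -- Does not match the indexed expression, so the FBI cannot be used:
    SELECT * FROM customer_data
     WHERE tenant_id = :tid AND lastname = 'SMITH';
    ```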
    Last edited by beilstwh; 10-23-07 at 17:07.
    Bill
    You do not need a parachute to skydive. You only need a parachute to skydive twice.

  6. #6
    Join Date
    Oct 2007
    Posts
    5
    Yes, our queries are like you mentioned:

    WHERE UPPER(lastname) LIKE '%SMITH%' (we do wildcard searches), and I see that the indexes are being used when I look at the query plan. One thing I have noticed is that after splitting the table, the cost has become 1/5th of the original; however, I am seeing a noticeable improvement in only one of the tables. I have split 2 different tables - in one of them, even though the cost is lower, the time taken by the query is still the same. Not sure why?

  7. #7
    Join Date
    Nov 2003
    Posts
    2,932
    Provided Answers: 12
    Quote Originally Posted by ng75
    WHERE upper(lastname) like '%SMITH%'
    This will never use a normal index because of the leading % sign.
    Only a LIKE with just a trailing % sign can make use of an index.
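
    If double-wildcard searches are a hard requirement, an Oracle Text index is one option to consider (a sketch, assuming the Oracle Text option is installed; index and table names are hypothetical):

    ```sql
    -- A B-tree index cannot help LIKE '%SMITH%', but an Oracle Text
    -- CONTEXT index can service substring searches:
    CREATE INDEX cust_lastname_txt_ix ON customer_data (lastname)
        INDEXTYPE IS CTXSYS.CONTEXT;

    SELECT * FROM customer_data
     WHERE CONTAINS(lastname, '%SMITH%') > 0;

    -- Note: CONTEXT indexes are not maintained synchronously by default;
    -- they need periodic synchronization via CTX_DDL.SYNC_INDEX.
    ```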

  8. #8
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    >>>What do YOUR benchmark test results show you?

    Why did you not respond with quantifiable results showing improvements based upon your plans?

    If you have the answers, why ask the question?
    If you are just shooting in the dark, then fire away & accept the consequences.

  9. #9
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
    Quote Originally Posted by ng75
    Yes, our queries are like you mentioned:

    WHERE UPPER(lastname) LIKE '%SMITH%' (we do wildcard searches), and I see that the indexes are being used when I look at the query plan. One thing I have noticed is that after splitting the table, the cost has become 1/5th of the original; however, I am seeing a noticeable improvement in only one of the tables. I have split 2 different tables - in one of them, even though the cost is lower, the time taken by the query is still the same. Not sure why?
    The cost given by the optimizer is relative; it means nothing in absolute terms. If 2 plans for a given query have different costs, then you can compare them, but costs for different queries (or even the "same" query on different objects) cannot be compared.

    You say that the optimizer uses indexes with a double-wildcard LIKE '%SMITH%'? Do you have Oracle Text indexes with prefixes and suffixes indexed?

    Would you mind posting your query, tables, indexes and execution plan? Then we might be of real help...

    Regards,

    rbaraer
