  1. #1
Join Date: Mar 2003

Unanswered: Query on optimal index

I have a table with an expected volume of about 200K records...

    I have about 3-4 processes inserting and deleting data in it concurrently (at isolation level 1)... These occasionally run in parallel and sometimes out of sync...

    There are also user requests on the data (at isolation level 3)...

    The processes involve deleting some rows, re-evaluating them, and re-inserting them...

    Can anyone suggest the best possible index (clustered or non-clustered) and locking scheme (rows/pages)?

  2. #2
Join Date: Feb 2002
200K records is not really a lot.

If the data is heavy OLTP stuff, then just stick with the APL scheme. DOL adds the pain of worrying about garbage collection (GC), REORG, etc.
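For reference, switching a table between the two lock schemes is a one-liner in Sybase ASE; a minimal sketch, assuming a placeholder table name `mytab`:

```sql
-- APL (allpages) is the traditional default scheme
alter table mytab lock allpages

-- DOL alternatives: page-level or row-level data-only locking
alter table mytab lock datapages
alter table mytab lock datarows

-- sp_help reports the current lock scheme among other details
sp_help mytab
```

Note that switching between APL and DOL physically rebuilds the table and its indexes, so on a busy system it is not a casual change.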

Indexing largely depends on your application. You need to look at the types of queries, the SARGs and predicates, and determine which columns are used most, the datatype of each column, whether a column is unique by design, and so forth. That will give you an idea of the right indexes for the table.
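As a concrete sketch of matching an index to the SARGs, with hypothetical table and column names (`orders`, `cust_id`, `status`):

```sql
-- the where clause supplies the SARGs:
select order_id, amount
from orders
where cust_id = @cust
  and status = 'OPEN'

-- a nonclustered index whose leading column is the most
-- selective SARG column lets the optimizer seek rather than scan
create nonclustered index orders_cust_status
    on orders (cust_id, status)
```

The column order matters: the optimizer can only use the index efficiently when the leading column appears in an equality or range predicate.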

  3. #3
Join Date: Mar 2003
Though it is not a large amount of data, it is causing some issues now...

And yes, it is a heavy-duty OLTP app...

I have the table lock scheme set to APL currently, and was exploring other options...

The batch processes tend to process (mostly updates, and at times inserts too) approx. 100-10K rows at each run...
    However, this tends to hamper user requests from the front-end...

I have done some base-level monitoring, and the main cause seems to be record/process blocking and some last-page contention... hence I started looking at altering the lock scheme or the type of indexes...

I have tried my best to re-design the indexes and currently have about 5 indexes on the table: 2 to serve user requests and 3 for batch-related processing... All of them are non-clustered, so the table is currently a heap...

    I also have update statistics scheduled on a daily basis...

I am trying some preventive maintenance before this boils over...

I understand REORG... but does GC relate to the housekeeper task, or have I missed the point?

  4. #4
Join Date: Aug 2004
    Heap + heavy OLTP = problems.

    Clustered index would help, but you need an index key that will "spread out" the action, best if the first index key does a lot of this spreading. I work with HR systems so usually something like EmployeeNumber does a good job of this. Of course, if the company were hiring 30,000 people a day then this might not work -- as usual, it all depends.
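If the heap were converted, a clustered index along those lines might look like this; a minimal sketch with hypothetical table and column names (`employee_action`, `EmployeeNumber`, `action_date`):

```sql
-- EmployeeNumber as the leading key spreads inserts across the
-- table instead of piling them all onto the last page of a heap
create clustered index emp_action_ci
    on employee_action (EmployeeNumber, action_date)
```

Whether this actually relieves last-page contention depends on the key's distribution: a monotonically increasing key (an identity column, a timestamp) would just recreate the hot spot at the end of the clustered index.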

For the batch stuff, this is a scenario where I have found that the old set-processing adage falls down. Better to do lots of little transactions that don't hamper the users doing the heavy OLTP stuff than big batch transactions that disrupt performance. The "batch" will take longer, but so long as your daily process doesn't take more than 24 hours, for example, slowing it down in this way is almost always better overall.
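One common Sybase ASE idiom for breaking a big delete into small transactions is `set rowcount`; a sketch, assuming a placeholder table `work_queue` and predicate:

```sql
declare @rows int

set rowcount 1000              -- cap each delete at 1000 rows

select @rows = 1               -- prime the loop
while @rows > 0
begin
    begin tran
    delete from work_queue where status = 'done'
    select @rows = @@rowcount  -- capture before any other statement resets it
    commit tran                -- short transaction: locks released each pass
end

set rowcount 0                 -- restore the default (no limit)
```

Each iteration holds locks only briefly, so concurrent front-end queries get a chance to run between chunks instead of blocking behind one huge transaction.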
