  1. #1

    Achieving concurrent run of a process !!

    Please provide your suggestions for achieving a concurrent run of a process (each run handles one group).



    It works fine for a restricted number of records.



    1. For 3-4 lakh records, the job fails due to deadlocks with the datapages lock scheme.

    2. With the datarows lock scheme, the server runs out of locks.

    3. With CLCL_ID and GRGR_ID as the clustered index on the tables and the datapages lock scheme, we were able to run the process in concurrent mode, but it did not work every time.



    None of the approaches was successful. The last solution we have now is to run each query in loops, which involves a total restructuring of the code and more effort.

    Awaiting your reply.


    _________________________________________________________________


    If there are no deadlocks, sp_sysmon does not display any detail information; it only prints the “Total Deadlocks” row with zero values.

    To pinpoint where deadlocks occur, use one or both of the following methods:

    Use sp_object_stats. See “Identifying tables where concurrency is a problem” on page 88 of Performance and Tuning: Locking for more information.
    Enable printing of detailed deadlock information to the log. See “Printing deadlock information to the error log” on page 85 of Performance and Tuning: Monitoring and Analyzing for Performance.
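    For reference, a minimal sketch of those two steps in Transact-SQL (the interval below is only an example, and "print deadlock information" adds overhead, so turn it back off once the deadlocks have been captured):

        -- Report the tables with the most lock contention over a 20-minute window
        sp_object_stats "00:20:00"
        go

        -- Print detailed deadlock information to the ASE error log
        sp_configure "print deadlock information", 1
        go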



    A deadlock can occur in Sybase Adaptive Server Enterprise (ASE) when two user processes each hold a lock on a separate page or table and each wants to acquire a lock on the page or table held by the other process. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.
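    As an illustration only (the table and column names here are made up), the classic two-session deadlock looks like this:

        -- Session 1                               -- Session 2
        begin tran                                 begin tran
        update member set stat = 1 where id = 1    update claim set stat = 1 where id = 9
        -- holds the lock on member id 1           -- holds the lock on claim id 9
        update claim set stat = 1 where id = 9     update member set stat = 1 where id = 1
        -- blocks on Session 2's lock              -- blocks on Session 1's lock
        -- ASE detects the cycle and kills one of the two transactions (error 1205).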

    There are ways to avoid deadlocks. These are general good practices:

    Write transactions so that they acquire locks in the same order across the system. This helps reduce both lock contention and deadlocks.
    Include within a transaction only those statements that must be committed or rolled back as a group. This also helps reduce lock contention and deadlocks.
    Avoid hot spots. Hot spots are points in the data or an index that many queries access. If too many queries access the same pages for selects and DML operations, lock contention and/or frequent deadlocks can result. For example, use a clustered index on the table so that it is not a heap.
    Avoid heap tables (i.e. tables with no clustered index). Inserts into a heap table all go to its last page, creating a hot spot that leads to lock contention and frequent deadlocks.
    Applications that are not affected by dirty reads (isolation level 0) may get better concurrency and fewer deadlocks when accessing the same data by setting transaction isolation level 0 at the beginning of each session. An example is an application that finds the momentary average balance for all savings accounts stored in a table. Since it requires only a snapshot of the current average balance, which probably changes frequently in an active table, the application should query the table using isolation level 0.
    Tables commonly involved in deadlocks are candidates for data-only locking (datarows or datapages); see the sketch after this list.
    Use stored procedures, as opposed to dynamic SQL, where feasible. Stored procedures have many advantages, including fewer blocks and deadlocks: a deadlock is much easier to handle if the entire transaction is performed in one database request, and locks are held for a shorter time, improving concurrency and potentially reducing the number of deadlocks. Furthermore, it is easier to ensure that all tables are accessed in a consistent order if code is stored centrally rather than dispersed among a number of applications.
    On OLTP systems, avoid creating a clustered index on a sequential key on a transaction-busy table. This creates a hot spot on the last page of the table and, even with row-level locking, can still cause frequent deadlocks and blocking.
    Avoid select into #table, alter #table, and creating #tables with constraints and defaults.
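    A short sketch of the isolation level 0 and lock scheme suggestions above (the table names are placeholders):

        -- Move a deadlock-prone table to data-only locking
        -- (datarows gives row-level locks, datapages gives page-level data locks)
        alter table claim_detail lock datarows
        go

        -- A read-only session that can tolerate dirty reads
        set transaction isolation level 0
        select avg(balance) from savings_account
        go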

  2. #2
    You need to say what your process does and preferably give some info on the tables. Include any indexes. Say how many concurrent processes and how many records in each relevant table.

    Do you know what a deadlock is?

  3. #3
    We are trying to run the same process concurrently. Each run has 2-3 lakh records, and we are trying to have such runs execute concurrently.

  4. #4
    Quote Originally Posted by JohnNagarjun
    We are trying to run the same process concurrently. Each run has 2-3 lakh records, and we are trying to have such runs execute concurrently.
    Thanks for your in-depth description of the problem.

    I'd just insert a return statement near line 4 of your process.
    This should improve the performance dramatically and reduce all locking.
    I was going to suggest deleting the data as well but then figured you just might.

  5. #5


    Quote Originally Posted by mike_bike_kite
    Thanks for your in-depth description of the problem.

    I'd just insert a return statement near line 4 of your process.
    This should improve the performance dramatically and reduce all locking.
    I was going to suggest deleting the data as well but then figured you just might.
    Hi,

    We have a requirement to execute a Sybase stored procedure concurrently with 2 or more instances, as needed. Each instance will have different input parameters. The stored procedure performs multiple inserts and updates against a single table, and each instance of the SP will insert and update more than 50000 records - some even up to 300000!
    We made the following configuration changes in the database:
    1) Changed the row and page lock promotion threshold HWM to 15000
    2) Increased the number of locks in Adaptive Server to 200000
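    Roughly, those two changes correspond to the commands below (the table name is just a placeholder, and the LWM/percentage values are assumptions):

        -- Raise the lock promotion high-water mark so large batches of row/page
        -- locks are not escalated to a table lock too early
        sp_setrowlockpromote "table", "claim_detail", 15000, 15000, 100
        go
        sp_setpglockpromote "table", "claim_detail", 15000, 15000, 100
        go

        -- Increase the total number of locks available in the server
        sp_configure "number of locks", 200000
        go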
    Approach 1:
    With no code change, we ran the process in concurrent mode with the datapages lock scheme, which resulted in deadlocks.
    Approach 2:
    We modified the table lock scheme to datarows and ran the process in concurrent mode, which resulted in the server running out of locks.
    Approach 3:
    We created clustered indexes on the tables; with the datapages lock scheme we were able to run the process in concurrent mode, but it did not work every time.
    Approach 4:
    We made the following changes in the code:
    1) Placed each insert and update query within a transaction and added logic to commit after every 10000 records are processed; basically, each query is executed in batches of 10000.
    2) Created the table with the DATAROWS lock scheme.
    Is this a good approach to achieving concurrency? We tested it and did not run into deadlocks or exhaust the server's locks, but the total run time increased. We added a condition to all insert queries to check that records inserted in a previous batch loop are not inserted again, so we wonder whether that condition increases the run time; we are not sure.
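    For clarity, here is a simplified sketch of the batching logic in 1) above - the table, column, and @grgr_id parameter names are invented, and the "already processed" check is reduced to a status flag:

        declare @rows int
        select @rows = 1

        set rowcount 10000                 -- cap each pass at one 10000-row batch
        while @rows > 0
        begin
            begin tran
                update claim_detail
                set    processed_flag = 1
                where  grgr_id = @grgr_id  -- this instance's input parameter
                and    processed_flag = 0  -- skip rows handled in earlier batches
                select @rows = @@rowcount  -- capture before anything resets it
            commit tran                    -- release the locks between batches
        end
        set rowcount 0                     -- remove the row limit again

    The per-batch commit is what keeps the number of locks held at any one time low, but each extra commit (and the extra "already processed" predicate) costs time, which is probably part of the longer run time we are seeing.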

    Solutions or suggestions for achieving a consistent concurrent run with optimal run time, as soon as possible, would be a great help!

  6. #6
    Quote Originally Posted by SriKrish
    Hi,

    We have a requirement to execute a Sybase stored procedure concurrently with 2 or more instances, as needed. Each instance will have different input parameters. The stored procedure performs multiple inserts and updates against a single table, and each instance of the SP will insert and update more than 50000 records - some even up to 300000!
    Normally concurrent processes are small processes that update a limited number of records. If you're updating 50000 records each time then this suggests your approach needs changing. Can you explain what the processes do and why they need to update so many rows each time? I suspect you're going to have to queue up these processes and run them one after the other.

    Will the processes often be running concurrently? Would it matter if they were queued?
