  1. #1

    Unanswered: Help on datarow locking

I have a problem with Sybase row-level locking. I run a delete script:

    delete
    from shp_pkg_stat
    where scan_datetime between '01/01/2000 00:00' and '08/01/2000 00:00'

but the script blocks all other transactions that access the shp_pkg_stat table.

shp_pkg_stat uses the datarows locking scheme, and the scan_datetime column has a non-clustered index.

I have also run set showplan on and set noexec on to verify that the script is using the index.

I'm using ASE 12.0 running on a Solaris 8 server.

My question is: why is the SQL script locking the shp_pkg_stat table? Anyone, please help.
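For reference, the plan check the poster describes would look something like this (a sketch; `set noexec on` compiles and shows the plan without actually running the delete):

    -- Show the query plan without executing the delete.
    set showplan on
    go
    set noexec on
    go
    delete
    from shp_pkg_stat
    where scan_datetime between '01/01/2000 00:00' and '08/01/2000 00:00'
    go
    -- Turn noexec off again, or the session will not execute anything further.
    set noexec off
    go
    set showplan off
    go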

  2. #2
By the way, the transactions that were blocked by my script were not using the same data rows I was accessing. I would appreciate all the help I can get, and thank you for reading my message.

Peace!

  3. #3

    Re: Help on datarow locking

You may be experiencing a table lock situation.
How many rows match that condition? And how many rows are there in the entire table? From the manual:
A table lock is more efficient than multiple page or row locks when an entire table might eventually be needed. At first, a task acquires page or row locks, then attempts to escalate to a table lock when a scan session acquires more page or row locks than the value set by the lock promotion threshold.
May I suggest you read more on 'lock promotion' and how to configure it.

Hope this helps.

    Originally posted by Cyberdude
I have a problem with Sybase row-level locking. I run a delete script:

    delete
    from shp_pkg_stat
    where scan_datetime between '01/01/2000 00:00' and '08/01/2000 00:00'

but the script blocks all other transactions that access the shp_pkg_stat table.

shp_pkg_stat uses the datarows locking scheme, and the scan_datetime column has a non-clustered index.

I have also run set showplan on and set noexec on to verify that the script is using the index.

I'm using ASE 12.0 running on a Solaris 8 server.

My question is: why is the SQL script locking the shp_pkg_stat table? Anyone, please help.
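To see the current server-wide thresholds the quoted passage refers to, something like this should work (a sketch; the exact option names can be confirmed with sp_configure and may differ slightly between ASE versions):

    -- Inspect the server-wide row lock promotion thresholds.
    sp_configure "row lock promotion HWM"
    go
    sp_configure "row lock promotion LWM"
    go
    sp_configure "row lock promotion PCT"
    go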

  4. #4


You mean what I experienced is just normal for Sybase: if my SQL script exceeds the threshold, a table lock is taken? If that is so, can I configure the maximum number of row locks before a table goes into table locking?

Have a nice day, and thanks for your reply; I appreciate it very much.


  6. #6
    Originally posted by Cyberdude
You mean what I experienced is just normal for Sybase: if my SQL script exceeds the threshold, a table lock is taken? If that is so, can I configure the maximum number of row locks before a table goes into table locking?

Have a nice day, and thanks for your reply; I appreciate it very much.
Correct, this is normal Sybase behaviour.

Look into the sp_configure options "row lock promotion LWM" and "row lock promotion HWM". These two control when Sybase promotes a number of row locks to a table lock (there are similar options for page-locked tables).
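The thresholds can also be raised for a single table rather than server-wide. A sketch using sp_setrowlockpromote (check the exact parameter order and limits for your ASE version; the values here are purely illustrative):

    -- Raise the row lock promotion threshold for shp_pkg_stat only, so the
    -- delete can hold more row locks before escalating to a table lock.
    -- Parameters: scope, object name, new LWM, new HWM, new PCT.
    sp_setrowlockpromote "table", "shp_pkg_stat", 10000, 10000, 100
    go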

That said, an easier option would be to split your delete into smaller segments, perhaps a month's (or two weeks') worth of data in a single batch:

    delete
    from shp_pkg_stat
    where scan_datetime between '01/01/2000' and '02/01/2000'

Deleting one large chunk of data isn't any faster than deleting many smaller chunks.

  7. #7
Thanks for all your help. I've already narrowed my delete script to one month, but it is still locking. By the way, the table uses 8 partitions, and the entire table holds 16,043,135 rows.

Can an index be partitioned?
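While the delete is running, a second session can check whether the row locks have in fact escalated to a table lock (a sketch; sp_lock output columns vary between ASE versions):

    -- From another session: list current locks. An exclusive table lock
    -- (e.g. locktype 'Ex_table') on shp_pkg_stat's object id would confirm
    -- that escalation happened.
    sp_lock
    go
    -- Locks on shp_pkg_stat have id = object_id('shp_pkg_stat'):
    select object_id('shp_pkg_stat')
    go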

  8. #8
Try setting rowcount to a value before you do your delete:

    set rowcount 10000

    delete from shp_pkg_stat
    where scan_datetime between '01/01/2000' and '02/01/2000'

    while (@@rowcount = 10000)
    begin
        delete from shp_pkg_stat
        where scan_datetime between '01/01/2000' and '02/01/2000'
    end

    -- Remember to reset, or rowcount keeps limiting later statements.
    set rowcount 0

Setting rowcount places a limit on how many rows Sybase will affect in a single operation. Reading @@rowcount always returns the actual number of rows affected by the last statement. Say you have 25,000 rows to delete with the above script. The first delete affects only 10,000 rows, so Sybase enters the loop. The first iteration deletes the next 10,000 rows; the next iteration deletes only 5,000 rows, so the while condition returns false and the loop exits.

    /Hacker

  9. #9
Thanks for your help. I've already finished deleting the data; I just created a PowerBuilder program that deletes the data one day at a time. I left it running before the holiday (Nov 1), and it's OK now.

Thanks, Hacker and tezza_chen, for the added info; it's been a great help for me.

Have a nice day and God bless.
