  1. #1
    Join Date
    Apr 2009
    Posts
    6

    Unanswered: How to use application context

    Hi
    I heard about using an application context for session-specific features, but I am not sure how to use it. Recently I faced a situation like this:
    I needed to update 10 million rows in a table.
    I disabled the update trigger before starting the update, to improve performance. But the update still takes about 7 hours, so disabling the trigger for 7 hours is not a good solution. Someone told me that I can use an application context to disable the trigger only for my session, so that for other users' sessions the trigger stays enabled.

    Does anyone know how I can use an application context in this case?
    Please help.
    Thanks

    Mahbub

  2. #2
    Join Date
    Mar 2009
    Location
    Sydney, Australia
    Posts
    258
    Application Context can be used for many things, including a bit of trickery (sketched below). But its purpose and intent is row-level Access Control. Typically a Login script (global or per-login) sets it up. Login scripts are unfortunately also called Login Triggers; they bear no relation or similarity to normal Triggers.

    AFAIK, you cannot disable a Trigger for a session only (that would break the purpose of Triggers). Besides, doing that is a table-level, not session-level command:
    alter table mytable disable trigger trigger_name
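
    For what it's worth, the "trickery" mentioned above does not actually disable the trigger: the trigger stays enabled for every session, but tests a session-scoped flag set via ASE's Application Context built-ins (set_appcontext/get_appcontext/rm_appcontext) and returns early when it finds the flag. A minimal sketch only; the context name, attribute name, table, trigger and AUDIT_LOG are all hypothetical:

    -- in the bulk-update session only: set a session-private flag
    -- (set_appcontext returns 0 on success, -1 on failure)
    select set_appcontext('batch_ctx', 'skip_trigger', '1')

    -- the trigger tests the flag; all other sessions get NULL back
    -- and the trigger body runs as usual
    create trigger mytable_upd_trig on mytable for update
    as
    begin
        if get_appcontext('batch_ctx', 'skip_trigger') = '1'
            return

        -- normal trigger body follows, eg a hypothetical audit insert
        insert AUDIT_LOG (table_name, updated_at)
        values ('mytable', getdate())
    end

    -- in the bulk-update session, when done: remove the flag
    select rm_appcontext('batch_ctx', 'skip_trigger')

    Note the check has to be written into the trigger itself, so this only works if you can alter the trigger; it is a bypass, not a disable.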

    Updating 10 million rows is not a problem. Break them into (eg) 16 updates based on PK range, and run 16 threads in parallel. Also, break that into reasonable transactions (batches), via set rowcount 1000; that will ensure you do not hold up other users or blow out the transaction log, and the entire process will be even faster.
    Regards
    Derek Asirvadem (Formerly DerekA)
    Information Architect / Senior Sybase DBA
    Copyright 2009 Software Gems Pty Ltd

    I answer questions from the Original Poster only. If you have a genuine question, as the moderators have requested, start a new thread.

    http://www.softwaregems.com.au

  3. #3
    Join Date
    Mar 2007
    Posts
    86

    Use group commits

    To get better performance, use group-commit logic to manage the log writes. The idea is to minimize the context switching and the number of writes-to-log incurred in a range update/delete operation, by using a begin/commit sequence around each batch. No need to break anything up into 16 instances etc.

    Here's an example using a delete on a date. I delete millions of rows in minutes, not hours, using this method. Customize the SQL statement for your use.

    declare @rows        int,
            @err         int,
            @total       int,
            @date_expiry datetime

    select @rows  = 1,
           @total = 0

    -- the oldest date in the table is the one to age out
    select @date_expiry = min(status_date)
      from STATUS_RESPONSE_CURRENT

    -- set batch size for each commit
    set rowcount 5000 -- or whatever suits your log

    select "Purge Started : " + convert(varchar(20), getdate(), 9)

    -- start the first transaction
    begin tran
    while @rows > 0 -- loop until the delete affects no rows
    begin

        -- age out all rows bearing the oldest date, one batch at a time
        delete STATUS_RESPONSE_CURRENT
         where STATUS_DATE = @date_expiry
        select @rows = @@rowcount, @err = @@error

        -- sybase returned a system error on the delete
        if @err > 0
        begin
            -- close any open transaction and quit
            if @@trancount > 0
                rollback tran
            return
        end

        select @total = @total + @rows
        commit tran -- commit the batch
        begin tran  -- begin a new transaction

    end -- exits when @rows = 0

    -- just to be sure, commit any open transaction prior to exit
    while @@trancount > 0
    begin
        commit tran
    end

    -- restore the default row count and report the total
    set rowcount 0
    select "Purge Finished: " + convert(varchar(20), getdate(), 9)
         + ", rows deleted: " + convert(varchar(12), @total)

  4. #4
    Join Date
    Mar 2009
    Location
    Sydney, Australia
    Posts
    258
    Quote Originally Posted by stuarta
    No need to break up anything to 16 instances etc.
    Stuarta has missed the point. He has detailed what I was suggesting in "Also, break that into reasonable transactions (batches)". That's one improvement.

    A separate and quite different improvement, which can be implemented with batches or without, is to break the entire stream into (eg) 16 threads, and execute them in parallel, given that you have 16 CPUs. These are two different things you can do to improve speed: one, the other, or both. You can have one thread executing batches of 5000 rows, or 16 threads executing batches of 5000 rows concurrently.
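
    To make that concrete, here is a minimal sketch of what one of the 16 parallel streams might look like, assuming an integer PK. The table ORDERS, the column names, and the update itself are hypothetical; each isql session runs the same script with its own @lo/@hi slice:

    -- worker 1 of 16 on a 10-million-row key space
    declare @lo int, @hi int, @batch int
    select @lo = 1, @hi = 625000, @batch = 5000

    -- walk the slice in 5000-key sub-ranges, one transaction per batch
    while @lo <= @hi
    begin
        begin tran
        update ORDERS
           set status = 'ARCHIVED'   -- your update here
         where order_id between @lo and @lo + @batch - 1
        commit tran
        select @lo = @lo + @batch
    end

    Keying the batches off the PK range (rather than set rowcount) means each row is touched exactly once and each transaction stays small.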

    Neither of those improvements (batches, parallel threads) has anything to do with Application Context, as per the recent functional provision by Sybase.
    Regards
    Derek Asirvadem (Formerly DerekA)
    Information Architect / Senior Sybase DBA
    Copyright 2009 Software Gems Pty Ltd

    I answer questions from the Original Poster only. If you have a genuine question, as the moderators have requested, start a new thread.

    http://www.softwaregems.com.au

  5. #5
    Join Date
    Mar 2007
    Posts
    86

    Multiple threads

    You're referring to an SOA architecture. Remember the network is a TDS protocol. Connection pooling still traverses the same TCP/TDS network path. The listener on the other end can be multiplexed, but not without a hardware cost. With a single listener, you still have a synchronous stream of TDS packets on a single network path, sourced from multiple connections. We do this with JBoss and ICAN gateways, but have reduced the pool to a minimum number of threads, as there is little throughput increase, while CPU on the appserver JVM side is extreme at high pooling numbers.

    Parallel threads would help in a bcp with partitioned tables, for instance. It sounded to me that this was not the case here, but that the performance issue encountered was due to millions of rows being updated without any user-defined transaction (UDT). In that case, log management and control of context switching are paramount.

    Hope all the participants have helped. I'm gone.

  6. #6
    Join Date
    Mar 2009
    Location
    Sydney, Australia
    Posts
    258
    I am referring to nothing of the sort; this has nothing to do with SOA, connection pooling, or the middle tier. Sybase quite happily handles the (interspersed) TDS packets of thousands of connections with no negative effect.

    OP needs to update millions of rows, and is seeking the fastest method. I have identified two separate methods.

    If (eg) you have 16 CPUs, you can execute up to 16 threads (each addressing 1/16th of the key range) in parallel, concurrently. Period. Execute them via isql, each with a separate connection directly to Sybase ASE, and forget about connection-pooling limitations from some middle tier or SOA or any such thing. The 16 threads in parallel will execute in close to 1/16th the time of 1 thread addressing the whole key range. A sketch follows below.
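
    As a sketch only (login, server name and the update are hypothetical, and the real per-slice script should batch its transactions as shown earlier), launching the 16 threads from the shell looks like this:

    #!/bin/sh
    # launch 16 isql sessions in parallel, one per PK slice of the
    # 10-million-row key space (625,000 keys each)
    for i in $(seq 0 15)
    do
        lo=$(( i * 625000 + 1 ))
        hi=$(( (i + 1) * 625000 ))
        echo "update ORDERS set status = 'ARCHIVED'
        where order_id between $lo and $hi
        go" | isql -Umaint_user -Psecret -SMYSERVER -o slice_$i.out &
    done
    wait    # return only when all 16 threads have completed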

    Now you may decide that (with eg 16 CPUs) you have a very busy server, and thus execute only 8 threads, which will execute in close to 1/8th the time of 1 thread addressing the whole key range.

    And BTW, parallel bcp works quite well for partitioned and unpartitioned tables.
    Regards
    Derek Asirvadem (Formerly DerekA)
    Information Architect / Senior Sybase DBA
    Copyright 2009 Software Gems Pty Ltd

    I answer questions from the Original Poster only. If you have a genuine question, as the moderators have requested, start a new thread.

    http://www.softwaregems.com.au
