  1. #1
    Join Date
    Sep 2003
    Posts
    8

    Table Locks in Oracle

    Here's the situation.

    We will be running incremental updates to a data mart hosted on an Oracle database. While an incremental is running, we don't want users to be able to access the data mart for reporting. In other words, we want to prevent any execution of SELECT statements against the tables in a given schema.

    As an application group, we do not have privileges to lock the database, but we do have privileges to lock the tables and possibly (I'm not sure) the schema as well.

    Given the above, could anyone please suggest how best to implement this? How would one prevent SELECT access to the tables?

    Any help would be appreciated.

    FYI, we are running Oracle 9i.

    Thanks,
    WiseDB (Guess I'm not so wise after all!)

  2. #2
    Join Date
    Jul 2003
    Posts
    2,296
    - You could alter the SELECT privileges of the users.

    - You could also rename the tables as you change them, then change the names back after you are done. (A sketch of both ideas follows below.)
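
    A minimal sketch of both ideas; the schema, table, and account names here (MART_OWNER, SALES_FACT, REPORTING_USER) are made up for illustration:

    -- Option 1: pull SELECT from the reporting account before the load,
    -- then grant it back when the load is done.
    REVOKE SELECT ON mart_owner.sales_fact FROM reporting_user;
    -- ... run the incremental ...
    GRANT SELECT ON mart_owner.sales_fact TO reporting_user;

    -- Option 2: hide the table under another name for the duration.
    -- (RENAME works only on objects in your own schema.)
    RENAME sales_fact TO sales_fact_loading;
    -- ... run the incremental ...
    RENAME sales_fact_loading TO sales_fact;
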
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  3. #3
    Join Date
    Sep 2003
    Posts
    8
    Originally posted by The_Duck
    - You could alter the SELECT privileges of the users.

    - You could also rename the tables as you change them, then change the names back after you are done.

    Thanks for the response.

    The user-level security is in the reporting front end; back-end access uses an application account. Since we have only one application account shared between ETL and reporting, altering user privileges is not an option.

    As for the renaming-tables option, this would mean rewriting all the ETL jobs (we are using a tool-based approach for the ETL). Also, as I understand with my limited knowledge, RENAME statements are usually costly (though very handy). Doing this for a huge number of tables, and twice at that (once before starting the incremental, then renaming them back afterwards), would not be acceptable given the timing issues we would end up with for the incremental.

    Thanks anyway! Keep them coming!!

    WiseDB

  4. #4
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi,

    I think if you read up on the detail of Oracle locking, serialisation, read consistency etc., you will find that you can change the update job so as to ensure that anyone querying the database always gets a consistent view until such time as the update completes.

    Oracle's default locking and consistency behaviour (while good) is not always adequate. I can't pretend to give you the right answer, but there are many transaction modes you could experiment with which may alleviate your concerns.

    Hth
    Bill

  5. #5
    Join Date
    Sep 2003
    Posts
    8
    Hi Bill,

    Thanks for your response.

    Actually the problem is slightly more than read consistency.

    The problem is more at the schema level: the entire set of tables together makes up an integral set of data that is interpreted as one body of information.

    It so happens that the updates to these tables happen in a sequence, one table at a time. When a table is being updated, or rather AFTER one table has been updated and while a second is BEING updated, the integral view gets distorted by temporal variances between the tables: one table has current data while another, still being loaded, reads as "yesterday's" data, causing read-consistency issues.

    The goal is to ensure consistent SELECT access across the entire set of tables, not just the table that's being updated.

    I need to work out a solution that
    * either prevents SELECT access altogether to the tables involved,
    * OR ensures that all the tables, regardless of whether they (one or many) are being modified, are read as "yesterday's" tables.

    Some teaser this one is!!

    Thanks,
    WiseDB


    Originally posted by billm
    Hi,

    I think if you read up on the detail of Oracle locking, serialisation, read consistency etc., you will find that you can change the update job so as to ensure that anyone querying the database always gets a consistent view until such time as the update completes.

    Oracle's default locking and consistency behaviour (while good) is not always adequate. I can't pretend to give you the right answer, but there are many transaction modes you could experiment with which may alleviate your concerns.

    Hth
    Bill

  6. #6
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi,

    How big is the update job? I mean, could it all be run as one transaction? Surely, as long as none of the updates are committed until the entire update is complete, you shouldn't have read-consistency problems: none of the updates would be visible to any users until the entire transaction is committed.

    There are of course other situations which can fall over on this, such as drill-down reporting, long user jobs etc. Imagine a user is sitting at a report (generated prior to the update); the update is executed; the user then clicks on a report element to drill down, and voila, the update has changed or deleted the data they were looking at.

    Oracle does offer transaction-level read consistency, as opposed to statement-level read consistency (SET TRANSACTION ISOLATION LEVEL SERIALIZABLE), which would alleviate the above problem. It very much depends on whether this can be implemented in your client software.

    Hth
    Bill

  7. #7
    Join Date
    Sep 2003
    Posts
    27
    Bill was right: Oracle can do this, you just need to change its read-consistency mode.

    The default read consistency is at statement level.
    However, you can set read consistency to transaction level, so that you can perform all your updates to the various tables and the user won't see any changes to any table until you commit.

    The statement you need is "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;"
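
    A minimal sketch of how a reporting session might use this (the table names are made up): every query inside the serializable transaction sees the database as of the moment the transaction began, even if the ETL commits in between.

    -- Must be the first statement of the reporting transaction:
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    SELECT SUM(amount) FROM sales_fact;      -- both queries see the same
    SELECT COUNT(*)    FROM customer_dim;    -- point-in-time snapshot

    COMMIT;  -- end the transaction to pick up newly committed data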

    Good luck.

  8. #8
    Join Date
    Sep 2003
    Posts
    27
    Kind of beat me to it there, Bill...

  9. #9
    Join Date
    Mar 2002
    Location
    Ireland
    Posts
    181
    Hi,

    At the beginning you said:

    "In other words, prevent any execution of select statements against the tables in a given schema."

    Would locking the schema's account for the duration be an option?

    alter user user_name account lock;
    then, when the load has finished:
    alter user user_name account unlock;

    Rgs,
    Breen.

  10. #10
    Join Date
    Sep 2003
    Posts
    8
    Bingo!! Bill... you got the problem alright! NOW we are speaking the same language!

    But no, the updates don't all run as one transaction, mainly because we are using a tool-based approach for the ETL. Each set of updates is canned into an ETL job (a mapping in the tool), and the entire incremental consists of many such jobs (mappings).

    The read inconsistency during drill-down/drill-through reporting is exactly what I want to avoid during the incremental. And the best way to get this done is to block front-end access!! But how? I mean, how at the database level?

    Thanks,
    - WiseDB

    Originally posted by billm
    Hi,

    How big is the update job? I mean, could it all be run as one transaction? Surely, as long as none of the updates are committed until the entire update is complete, you shouldn't have read-consistency problems: none of the updates would be visible to any users until the entire transaction is committed.

    There are of course other situations which can fall over on this, such as drill-down reporting, long user jobs etc. Imagine a user is sitting at a report (generated prior to the update); the update is executed; the user then clicks on a report element to drill down, and voila, the update has changed or deleted the data they were looking at.

    Oracle does offer transaction-level read consistency, as opposed to statement-level read consistency (SET TRANSACTION ISOLATION LEVEL SERIALIZABLE), which would alleviate the above problem. It very much depends on whether this can be implemented in your client software.

    Hth
    Bill

  11. #11
    Join Date
    Sep 2003
    Posts
    8
    Hi there,

    Thanks for posting!

    What you and Bill are suggesting will work perfectly for multi-table updates in the same transaction, but my ETL jobs span multiple transactions.

    One thought I had was to bring down the reporting portal while the incrementals are running, but I want to avoid this as far as possible, as it brings its own dev and automation challenges.

    I was looking more for an error that could be raised at the back end when someone runs a SELECT, which would then escalate through the front end to the user, indicating what's happening at the back end. The error message doesn't have to say exactly that the incrementals are running; something like "resource busy/locked" would do.

    This sure is keeping me scratching!! Thanks for your inputs.

    WiseDB




    Originally posted by tdrevans
    Bill was right: Oracle can do this, you just need to change its read-consistency mode.

    The default read consistency is at statement level.
    However, you can set read consistency to transaction level, so that you can perform all your updates to the various tables and the user won't see any changes to any table until you commit.

    The statement you need is "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;"

    Good luck.

  12. #12
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi,

    Do you want to stop all access during the update, or just be sure that the users are aware that an update occurred and consistency could be a problem?

    If you want to stop them, then Breen's suggestion, coupled with an 'alter system kill session' for all/selected sessions connected to your schema prior to the job start, would help (see the sketch below). Life would be so much easier if everything made a few calls to DBMS_APPLICATION_INFO before starting.
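
    A sketch of the kill-session step, assuming the reporting sessions connect as a hypothetical MART_APP account and the job runs with ALTER SYSTEM privilege:

    -- Generate a kill command for each session logged on as MART_APP.
    -- (If clients called DBMS_APPLICATION_INFO.SET_MODULE, you could
    -- filter on v$session.module instead of username.)
    SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''';'
    FROM   v$session
    WHERE  username = 'MART_APP';

    -- Then run each generated command, e.g.:
    ALTER SYSTEM KILL SESSION '23,4711';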

    If you just want them to be aware, then there are a number of options...

    Have your update job recompile a package which has package variables in it. The next time your user accesses the package variables they'll get a 'package state changed' error; trap this and report it.
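
    A sketch of the package trick; MART_STATUS is a made-up package name:

    -- A package with a variable, so sessions that reference it hold state:
    CREATE OR REPLACE PACKAGE mart_status AS
      g_load_started DATE := SYSDATE;
    END mart_status;
    /

    -- The update job recompiles it at the start of the load:
    ALTER PACKAGE mart_status COMPILE;

    -- Any session that had already touched mart_status now gets
    -- ORA-04068 ("existing state of packages has been discarded")
    -- on its next reference, which the client can trap and report.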

    Another option: have your update job write 'Working' into a status table, and 'Finished' when it's done. Ensure this is always displayed in the users' client application (in a status bar etc.).
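
    A sketch of the status table; ETL_STATUS and the job name are illustrative:

    CREATE TABLE etl_status (
      job_name   VARCHAR2(30) PRIMARY KEY,
      status     VARCHAR2(10),               -- 'WORKING' or 'FINISHED'
      updated_at DATE
    );

    -- At job start (commit at once so readers can see the flag):
    UPDATE etl_status
    SET    status = 'WORKING', updated_at = SYSDATE
    WHERE  job_name = 'NIGHTLY_INCREMENTAL';
    COMMIT;

    -- ... run the incremental ...

    UPDATE etl_status
    SET    status = 'FINISHED', updated_at = SYSDATE
    WHERE  job_name = 'NIGHTLY_INCREMENTAL';
    COMMIT;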

    You could also send a DBMS_ALERT on job start/end; this would alert the users to potential consistency issues. I use DBMS_ALERT heavily for client-side progress bars on server-side jobs, and they work well.
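
    A sketch of the DBMS_ALERT round trip (the alert name 'MART_LOAD' is made up):

    -- Server side, at job start (the signal is delivered on commit):
    BEGIN
      DBMS_ALERT.SIGNAL('MART_LOAD', 'STARTED');
      COMMIT;
    END;
    /

    -- Client side, a session that wants to be notified:
    DECLARE
      l_message VARCHAR2(1800);
      l_status  INTEGER;
    BEGIN
      DBMS_ALERT.REGISTER('MART_LOAD');
      DBMS_ALERT.WAITONE('MART_LOAD', l_message, l_status, 60);  -- 60s timeout
      -- l_status = 0: alert received; l_status = 1: timed out
    END;
    /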

    If you truly want to ensure consistency across these committed transactions without affecting the user, then you will have to consider adding historisation to the tables. It's a fair amount of work to set up but is a beauty when working.

    I was once involved in a banking environment where the client would do this....

    pck_historisation.SetBusinessDay( '23/09/2003')

    select * from transactions, select * from home_address, select * from work_address etc.

    pck_historisation.SetBusinessDay( '23/09/2001')

    select * from transactions, select * from home_address, select * from work_address etc.

    The two would return completely different result sets. Historised tables are complicated to set up and require strong discipline from your programmers, but they're an absolute beauty when working. You also need a solid grasp of SQL tuning to play with this.

    The bank went from (poorly coded) processes that required business jobs to be run every day (start of business, close of business, tax calculations, authority notifications etc.) to an environment where they could queue these up for days until they were ready to go with the next 'business day'.

    A slightly poorer version of this historisation would be to have your client apps query snapshots (materialised views). You could run all your updates on the base tables with DBMS_REFRESH disabled, and switch it back on when you're ready.
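
    A sketch of the snapshot variant (all names illustrative): reports query the materialised views, and a refresh group publishes every table's new image in one atomic step.

    CREATE MATERIALIZED VIEW sales_fact_mv
      REFRESH COMPLETE ON DEMAND
      AS SELECT * FROM sales_fact;

    -- Put all the mart MVs in one refresh group so they refresh in a
    -- single transaction, i.e. as a consistent set:
    BEGIN
      DBMS_REFRESH.MAKE(name      => 'mart_group',
                        list      => 'sales_fact_mv, customer_dim_mv',
                        next_date => SYSDATE,
                        interval  => NULL);
    END;
    /

    -- After the incremental has finished on the base tables, publish:
    BEGIN
      DBMS_REFRESH.REFRESH('mart_group');
    END;
    /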

    Hth
    Bill

  13. #13
    Join Date
    Sep 2003
    Posts
    8
    Bill,

    Thanks for the ideas. Will work on those further!

    Meanwhile, some points below:

    Originally posted by billm
    Do you want to stop all access during the update, or just be sure that the users are aware that an update occurred and consistency could be a problem?

    Yes, stopping all access would be preferred...

    Originally posted by billm
    If you want to stop them, then Breen's suggestion, coupled with an 'alter system kill session' for all/selected sessions connected to your schema prior to the job start, would help.

    The problem is that we have only one application account (schema) shared by the back end as well as the front end. There's no way we could lock it. And I guess this closes most of the account/schema-related avenues and leaves us with only the process/transaction-related ones.

    Originally posted by billm
    Have your update job recompile a package which has package variables in it... Another option: have your update job write 'Working' into a status table, and 'Finished' when it's done.

    These were among the thoughts I had. But this needs development on the front end as well, and since the front end is a product too, it depends on the flexibility of the tool to allow such customisation. It does seem doable, though.

    Originally posted by billm
    If you truly want to ensure consistency across these committed transactions without affecting the user, then you will have to consider adding historisation to the tables.

    Time variance is the essence of the data mart, as is the capability to report in multiple time contexts at the same time: trend reports, period-to-date reports, this-month-last-year and this-quarter-last-year reports, and so on. This is more of a BI solution requiring the capability to slice the data across dimensions (time being one of them). At an operational and business-transactional level, "historisations" are a wonder, and I couldn't agree with you more!

    Originally posted by billm
    The bank went from (poorly coded) processes that required business jobs to be run every day... to an environment where they could queue these up for days until they were ready to go with the next 'business day'.

    The "revolutions" that brains (and using them!) can bring about!!

    Originally posted by billm
    A slightly poorer version of this historisation would be to have your client apps query snapshots (materialised views).

    Nah... way too much happening at the back end for a (seemingly!) simple requirement.

    It seems this would be best handled at the front end, with logons to the portals denied based on status flags, and any report refreshes checking those status flags, if the tool allows it!!

  14. #14
    Join Date
    Aug 2002
    Location
    SHANGHAI,CHINA
    Posts
    20
    Use the LOCK TABLE statement.

    Example:
    session 1:
    SQL> insert into a values('LLL',4);

    1 row created.

    session 2:
    SQL> lock table a in row exclusive mode;

    Table(s) Locked.
    SQL> select * from a for update;

    It will wait until session 1 commits or rolls back.

    session 1:
    SQL> commit;

    Commit complete.

    session 2 (the SELECT now returns):
    C1 C2
    ---------- ----------
    FFF 2
    MMM 3
    XXX 1
