  1. #1
    Join Date
    Nov 2010
    Posts
    6

asynchronous commit per transaction

I am using PostgreSQL 8.4.(4?) through the psycopg module for a Python-based application.

I'm looking to improve performance, especially around a lot of small writes.

I'm not intimately familiar with Postgres tuning, so I'm going for simple and easy to understand. Asynchronous commits sound like a good idea, because in the event of a failure I could lose the last couple of write operations and still be okay. Some of the literature seems to indicate that hundreds of operations per second could become thousands. Sounds worth trying.

BUT the same server hosts multiple databases. One of the databases could go asynchronous, but another absolutely must not. The Postgres manual indicates that server configuration parameters (like asynchronous commit) can be modified "per function," so I'm looking for an example of this.

I'm going nuts trying to figure out how to execute writes against only one of the databases asynchronously. There don't seem to be PRAGMAs or special commands in the regular SQL stream, from what I can (can't) find. I'm running through an adapter (psycopg), so I don't necessarily have the opportunity to make very direct calls to the database.

If it makes any difference, I'm running my Python app in "auto-commit" mode because I have a lot of fairly simple SQL calls that are okay to interleave between clients. I only have a few large and heavy SQL commands, and even those are atomic unto themselves and don't need to be part of a larger transaction.
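(For reference, this is roughly how auto-commit gets switched on through psycopg2; the connection details are placeholders:)
Code:
import psycopg2
import psycopg2.extensions

# Placeholder credentials -- substitute your own.
conn = psycopg2.connect(dbname="mydb", user="myuser")

# Every execute() now commits immediately instead of opening a transaction.
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)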

    Any suggestions?
    Thanks, M

  2. #2
    Join Date
    Nov 2003
    Posts
    2,935
    Provided Answers: 12
    Quote Originally Posted by drone115b View Post
The Postgres manual indicates that server configuration parameters (like asynchronous commit) can be modified "per function," so I'm looking for an example of this.
    You can change that setting on a per-connection basis using the following SQL statement:
    Code:
set synchronous_commit = off
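A minimal sketch of issuing that through psycopg2 (the connection details are placeholders):
Code:
import psycopg2

# Placeholder credentials -- substitute your own.
conn = psycopg2.connect(dbname="mydb", user="myuser")
cur = conn.cursor()

# Takes effect for this session only; other sessions and other
# databases on the same server keep the synchronous default.
cur.execute("SET synchronous_commit TO OFF")
conn.commit()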

Quote Originally Posted by drone115b View Post
If it makes any difference, I'm running my Python app in "auto-commit" mode
That is a very bad idea. By committing every row, you put an unnecessarily high load on the server. If you can batch several updates/inserts into a single transaction, that will improve performance.
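For example, a sketch of batching with psycopg2 (the table and data are invented for illustration):
Code:
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="myuser")  # placeholder credentials
cur = conn.cursor()

rows = [("a", 1), ("b", 2), ("c", 3)]  # invented sample data
try:
    # One transaction covers the whole batch instead of one commit per row.
    for name, qty in rows:
        cur.execute(
            "INSERT INTO items (name, qty) VALUES (%s, %s)",  # hypothetical table
            (name, qty),
        )
    conn.commit()  # a single WAL flush covers all three inserts
except Exception:
    conn.rollback()  # the whole batch is undone together
    raise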

    Do have a look at this PostgreSQL Wiki page: Performance Optimization - PostgreSQL Wiki

    You might also need to review your settings for wal_buffers and checkpoint_segments.

  3. #3
    Join Date
    Nov 2010
    Posts
    6

    RE: asynchronous commit per transaction

    Thanks! I appreciate the block of SQL. Will try that out real soon.

    RE: creating transactions to chunk together many small writes -
unfortunately, each client only has a little bit of work to do, and there are many clients, so it's not immediately clear to me how to chunk many updates together into one transaction most of the time. (There is one situation where it would work.)

One of my other problems is that, if a rollback is necessary, it's not immediately apparent how much work I need to redo to make sure my write is still relevant, so I am hoping to stick with auto-commit. I think it's 95% the nature of the application I'm writing and perhaps 5% me not having much spare time.

    That said, postgres is holding up under the load, I'm just looking to get more out of the hardware.

    Thanks, M

  4. #4
    Join Date
    Nov 2003
    Posts
    2,935
    Provided Answers: 12
    Quote Originally Posted by drone115b View Post
One of my other problems is that, if a rollback is necessary, it's not immediately apparent how much work I need to redo to make sure my write is still relevant, so I am hoping to stick with auto-commit. I think it's 95% the nature of the application I'm writing and perhaps 5% me not having much spare time.
If you are running with auto-commit, you cannot roll back anyway (at least not using a database rollback).

  5. #5
    Join Date
    Nov 2010
    Posts
    6

    RE: asynchronous commit per transaction

I should have been clearer. If I were to chunk up my updates (99%) and inserts (1%), then I could find myself needing to roll back some transactions.

For example, many of my updates are conditional. Under auto-commit (correct me if I'm wrong) I don't really have a problem: if the condition fails, the update falls through without any issue. But if I have two longer transactions running concurrently, each may see different conditions and perform operations that contend, so one of them will need to roll back.
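(A sketch of the kind of conditional update I mean, with an invented table and value:)
Code:
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(dbname="mydb", user="myuser")  # placeholder credentials
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()

job_id = 42  # invented example value

# If the WHERE clause matches nothing, the UPDATE is a harmless no-op.
cur.execute(
    "UPDATE jobs SET state = 'running' "
    "WHERE id = %s AND state = 'queued'",  # hypothetical table/columns
    (job_id,),
)
if cur.rowcount == 0:
    print("condition failed; another client got there first")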

I should also thank you for the optimization page reference. I think I saw it the last time I took an optimization pass, but I will read it again and refresh my memory.

  6. #6
    Join Date
    Nov 2003
    Posts
    2,935
    Provided Answers: 12
    Quote Originally Posted by drone115b View Post
I should have been clearer. If I were to chunk up my updates (99%) and inserts (1%), then I could find myself needing to roll back some transactions.

For example, many of my updates are conditional. Under auto-commit (correct me if I'm wrong) I don't really have a problem: if the condition fails, the update falls through without any issue.
You can wrap each update/insert in a savepoint, so you can safely roll back a single failed statement. But given your way of processing, auto-commit will achieve basically the same thing.
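A sketch of that savepoint pattern through psycopg2 (table and data are invented):
Code:
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="myuser")  # placeholder credentials
cur = conn.cursor()

statements = [
    ("UPDATE items SET qty = qty - 1 WHERE name = %s", ("a",)),  # hypothetical
    ("INSERT INTO items (name, qty) VALUES (%s, %s)", ("d", 4)),
]

for sql, params in statements:
    cur.execute("SAVEPOINT sp")
    try:
        cur.execute(sql, params)
        cur.execute("RELEASE SAVEPOINT sp")
    except psycopg2.Error:
        # Undo just this statement; the rest of the transaction survives.
        cur.execute("ROLLBACK TO SAVEPOINT sp")

conn.commit()  # commits whatever succeeded, as one transaction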

  7. #7
    Join Date
    Nov 2010
    Posts
    6
    Tried putting that line into my SQL stream, at the beginning of a connection before any other SQL commands have executed, and it's causing the Postgres server to crash right and left.

  8. #8
    Join Date
    Nov 2003
    Posts
    2,935
    Provided Answers: 12
    Quote Originally Posted by drone115b View Post
    Tried putting that line into my SQL stream
    What exactly is "your SQL stream"?
    Can you show us the full SQL statements that you execute?

  9. #9
    Join Date
    Nov 2010
    Posts
    6
Spent a while trying to restore the database from the pgAdmin interface. Vacuum and other operations did not seem to help. I had to drop the database and recreate it from scratch.

    Replaced:
    set synchronous_commit = off
    with:
    SET LOCAL synchronous_commit TO OFF

So, after I connect to the database, I issue the SET LOCAL call, then follow it with a write, such as a conditional UPDATE.

    Seems to be holding up now. Not sure what hosed the database before. Difficult to know whether performance has really improved without more testing.
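(Worth noting: per the PostgreSQL manual, SET LOCAL only lasts until the end of the current transaction and has no effect when issued outside a transaction block, so under auto-commit it may be silently ignored. A sketch of using it inside an explicit transaction through psycopg2, with placeholder names:)
Code:
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="myuser")  # placeholder credentials
cur = conn.cursor()

# psycopg2 (in its default, non-autocommit mode) opens a transaction on the
# first execute(), so both statements below share one transaction block.
cur.execute("SET LOCAL synchronous_commit TO OFF")
cur.execute(
    "UPDATE jobs SET state = 'done' WHERE id = %s",  # hypothetical table
    (42,),
)
conn.commit()  # returns without waiting for the WAL flush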

  10. #10
    Join Date
    Nov 2010
    Posts
    6
Okay, well, during testing, an abnormal termination of a client corrupts the database. Not every time, but perhaps every eighth time or so, under a medium load on the server. By abnormal termination, I mean sending the client process a kill signal and letting it die no matter where it is in the process of writing to the database.

Did not have this problem before messing with asynchronous commits, so I'm backing out of that change.
