  1. #1
    Join Date
    Jun 2003

    Unanswered: Postgres performance changes drastically

    I'm managing a heavily accessed database in Postgres. We are running on a dual Xeon 2.8 GHz machine with 2 GB of RAM.

    Every so often, it is as if someone flips a switch and the site slows to a crawl. After a few hours it speeds back up just as quickly as it slowed down.

    For example, the following query takes 8-15 seconds during the slow periods, but only 0.01-0.03 seconds during the fast periods.

    explain analyse select * from url_list where (((now() + interval '5 days')>paidthru)) ORDER BY lasthit LIMIT 1;

    The table has about 3500 rows in it.

    I've tried full vacuums, reindexing the tables, and restarting Postgres, but nothing seems to help; then all of a sudden it speeds back up and works fine for a day or so.
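
    In case it matters, by "full vacuums" and "reindexing" I mean commands along these lines (just to illustrate; url_list is the table from the query above):

    VACUUM FULL url_list;
    REINDEX TABLE url_list;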

    Any ideas?

  2. #2
    Join Date
    Sep 2001

    shared memory

    Maybe it's a shared memory problem.

    Did you change the settings in postgresql.conf? With 2 GB of RAM you should adjust them from the defaults.

    That can be a good performance booster.

  3. #3
    Join Date
    Jun 2003

    Re: shared memory

    Here are the values I've set in postgresql.conf.

    tcpip_socket = true
    max_connections = 768
    superuser_reserved_connections = 2
    shared_buffers = 12288 # min max_connections*2 or 16, 8KB each
    sort_mem = 2048 # min 64, size in KB
    checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
    checkpoint_timeout = 300 # range 30-3600, in seconds
    stats_command_string = true
    stats_row_level = true
    stats_block_level = true
    LC_MESSAGES = 'en_US.iso885915'
    LC_MONETARY = 'en_US.iso885915'
    LC_NUMERIC = 'en_US.iso885915'
    LC_TIME = 'en_US.iso885915'
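
    (If my math is right from the comments above, shared_buffers = 12288 at 8 KB per buffer works out to about 96 MB of shared memory, and sort_mem = 2048 is 2 MB per sort, per backend, so with max_connections = 768 the sort memory alone could in theory add up to roughly 1.5 GB if every connection were sorting at once.)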

    Originally posted by eperich
    Maybe it's a shared memory problem.

    Did you change the settings in postgresql.conf? With 2 GB of RAM you should adjust them from the defaults.

    That can be a good performance booster.

  4. #4
    Join Date
    Apr 2003


    Maybe you could set the postmaster process to a higher priority and give it more RAM.

  5. #5
    Join Date
    May 2002
    Florida, USA
    This sounds like an odd problem. Have you run any tests on the system itself to make sure the hardware doesn't have memory problems? Or maybe you have a disk access problem.

    For a table of only 3500 rows, it is very unlikely that the problem has to do with indexing or performance settings, though. I suspect the problem is either with the server or with the client environment. What type of application are you running, and what is the client connection method? Are you storing any large objects (blobs, etc.) in this table?

    Other than this, I can suggest a couple more things:

    What operating system are you running? If it is Linux, you might want to check your kernel limits for shared memory (SHMMAX and SHMALL). The defaults for these are often rather low, and on a machine like yours they should probably be set somewhere between 128 MB and 512 MB.
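
    For example (just a sketch; pick values that fit your setup, and note that on Linux shmmax is in bytes while shmall is counted in pages, usually 4 KB), you could put something like this in /etc/sysctl.conf for a 256 MB ceiling:

    # max size of a single shared memory segment, in bytes (256 MB)
    kernel.shmmax = 268435456
    # total shared memory allowed, in 4 KB pages (256 MB / 4 KB)
    kernel.shmall = 65536

    Then load it with 'sysctl -p' (or set the values on the fly with 'sysctl -w').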

    Also, your postgresql.conf sort_mem setting seems low. If the shared memory settings mentioned above are high enough, you should probably triple or quadruple sort_mem.
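
    For example (just a ballpark; remember sort_mem is allocated per sort, per backend, so keep max_connections in mind):

    sort_mem = 8192 # size in KB, i.e. 8 MB per sort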

    Have you checked the PostgreSQL logfiles? Have you tried running at a higher debug level, to see what the logfiles show?
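
    On the 7.3-era releases the logging settings in postgresql.conf look roughly like this (the exact parameter names vary between versions, so check the docs for yours):

    server_min_messages = debug2 # raise server log verbosity
    log_statement = true # log every statement
    log_duration = true # log how long each statement took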

    Finally, I notice that you are using a time interval comparison, "(((now() + interval '5 days')>paidthru))". Have you indexed the 'paidthru' column, as well as the 'lasthit' column? Were you aware that with PostgreSQL you can also index a function? You might want to make a function (stored procedure) that performs this comparison and create an index on that function.
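
    For instance (the index names are just placeholders; the table and column names come from the query in the first post):

    CREATE INDEX url_list_paidthru_idx ON url_list (paidthru);
    CREATE INDEX url_list_lasthit_idx ON url_list (lasthit);

    The second one in particular may let the planner satisfy the ORDER BY lasthit LIMIT 1 from the index instead of sorting the whole table.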
