  1. #1
    Join Date
    Sep 2002
    Location
    Toronto, Canada
    Posts
    76

    Unanswered: Increasing the buffer size

    I increased the shared_buffers value for my PostgreSQL installation and I didn't notice any performance improvement.

    Am I the only one who doesn't see an improvement? I set shared_buffers to 4096, sort_mem = 4092, effective_cache_size = 2000, and max_connections = 35.
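    For reference, here is roughly what those settings look like in postgresql.conf, with the units the 7.x releases use noted in comments (the values are the ones above; the megabyte figures are my own arithmetic):

        # postgresql.conf -- a sketch of the settings mentioned above
        shared_buffers = 4096          # counted in 8 KB buffers, so about 32 MB of shared cache
        sort_mem = 4092                # in KB, so about 4 MB per sort step, per backend
        effective_cache_size = 2000    # in 8 KB pages (~16 MB); a planner hint, allocates nothing
        max_connections = 35

    Note that effective_cache_size only feeds the planner's cost estimates; raising it by itself doesn't allocate any memory, so it can change query plans but not caching behavior.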

  2. #2
    Join Date
    Dec 2002
    Posts
    65
    Well, I think it's going to depend on the size of your databases and what you're selecting. If everything you were doing already fit into the shared buffers before (or if you didn't increase your shared buffers by enough), then you might not see much of a change.

    In my personal experience, I have noticed large performance differences when tweaking according to the tips I've found at phpbuilder and Bruce Momjian's tuning notes over at techdocs.postgresql.org, even for relatively small databases.

    Also, if changes on the hardware and configuration side aren't helping, there's an article on techdocs about tweaking your queries; the sketch below shows the usual starting point.
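    If you go the query-tweaking route, the usual first step is EXPLAIN ANALYZE, which shows the plan the optimizer chose alongside the actual row counts and timings (the table and column here are made up for illustration):

        -- Compare the planner's estimates against what actually happened.
        -- Big gaps between estimated and actual row counts usually mean
        -- the statistics are stale.
        EXPLAIN ANALYZE
        SELECT * FROM orders WHERE customer_id = 42;

        -- Refresh the planner's statistics if the estimates look way off:
        ANALYZE orders;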

    The most reliable answer would probably be to join the PERFORM postgres mailing list and post a question there.

    HTH
    -b
    (I'm only available at the email address provided in my profile on weekdays; if you have questions or advice during off hours, use AIM.) Also, any views I provide here or on my website are mine and not representative of the views of my work, family, friends, and sometimes even myself.

    http://www.bcyde.com

  3. #3
    Join Date
    Apr 2003
    Location
    Tunisia
    Posts
    192

    It depends

    Does this depend on the system? Yes, I think so. I've noticed that the precompiled version of PostgreSQL bundled with Red Hat 9 seems more flexible in managing memory.

  4. #4
    Join Date
    Sep 2002
    Location
    Toronto, Canada
    Posts
    76
    The size of the database I'm working on is 350 MB. On the same system, I have 5 other databases. The total size of $PGDATA/base is 1.7 GB, but none of the other databases are being used right now (they were all created for testing).
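    In case it helps anyone, here is one way to break a $PGDATA/base total down per database; each subdirectory under base is named after a database's OID (a sketch, assuming $PGDATA points at the data directory):

        $ psql -c "SELECT datname, oid FROM pg_database;" template1
        $ du -sh $PGDATA/base/*    # match each directory's OID to a datname above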
    Last edited by unicef2k; 05-08-03 at 12:50.

  5. #5
    Join Date
    Dec 2002
    Posts
    65
    Well, here's a blurb from the postgres mailing list that might interest you:


    "> That seems odd... shouldn't pgsql be able to cache information better
    > since it would be cached in whatever format is best for it, rather than
    > the raw page format (or maybe that is the best format). There's also the
    > issue of having to go through more layers of software if you're relying
    > on the OS caching. All the tuning info I've seen for every other
    > database I've worked with specifically recommends giving the database as
    > much memory as you possibly can, the theory being that it will do a much
    > better job of caching than the OS will.

    There are a number of reasons why that's a dubious policy for PG (I
    won't take a position on whether these apply to other databases...)

    One is that because we sit on top of the OS' filesystem, we can't
    (portably) prevent the OS from caching blocks. So it's quite easy to
    get into a situation where the same data is cached twice, once in PG
    buffers and once in kernel disk cache. That's clearly a waste of RAM
    however you slice it, and it's worst when you set the PG shared buffer
    size to be about half of available RAM. You can minimize the
    duplication by skewing the allocation one way or the other: either set
    PG's allocation relatively small, relying heavily on the OS to do the
    caching; or make PG's allocation most of RAM and hope to squeeze out
    the OS' cache. There are partisans for both approaches on this list.
    I lean towards the first policy because I think that starving the kernel
    for RAM is a bad idea. (Especially if you run on Linux, where this
    policy tempts the kernel to start kill -9'ing random processes ...)

    Another reason is that PG uses a simplistic fixed-number-of-buffers
    internal cache, and therefore it can't adapt on-the-fly to varying
    memory pressure, whereas the kernel can and will give up disk cache
    space to make room when it's needed for processes. Since PG isn't
    even aware of the total memory pressure on the system as a whole,
    it couldn't do as good a job of trading off cache vs process workspace
    as the kernel can do, even if we had a variable-size cache scheme.

    A third reason is that on many (most?) Unixen, SysV shared memory is
    subject to swapping, and the bigger you make the shared_buffer arena,
    the more likely it gets that some of the arena will be touched seldom
    enough to make it a candidate for swapping. A disk buffer that gets
    swapped to disk is worse than useless (if it's dirty, the swapping
    is downright counterproductive, since an extra read and write cycle
    will be needed before the data can make it to its rightful place).

    PG is *not* any smarter about the usage patterns of its disk buffers
    than the kernel is; it uses a simple LRU algorithm that is surely no
    brighter than what the kernel uses. (We have looked at smarter buffer
    recycling rules, but failed to see any performance improvement.) So the
    notion that PG can do a better job of cache management than the kernel
    is really illusory. About the only advantage you gain from having data
    directly in PG buffers rather than kernel buffers is saving the CPU
    effort needed to move data across the userspace boundary --- which is
    not zero, but it's sure a lot less than the time spent for actual I/O.

    So my take on it is that you want shared_buffers fairly small, and let
    the kernel do the bulk of the heavy lifting for disk cache. That's what
    it does for a living, so let it do what it does best. You only want
    shared_buffers big enough so you don't spend too many CPU cycles shoving
    data back and forth between PG buffers and kernel disk cache. The
    default shared_buffers setting of 64 is surely too small :-(, but my
    feeling is that values in the low thousands are enough to get past the
    knee of that curve in most cases."
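    To make the "low thousands" figure concrete: shared buffers are 8 KB apiece, so the arithmetic works out roughly like this (my own worked example, not from the quoted mail):

        # shared_buffers is counted in 8 KB buffers:
        #   4096 buffers x 8 KB = 32 MB of PostgreSQL shared cache
        # On a machine with, say, 512 MB of RAM, that leaves the bulk of
        # memory for the kernel's disk cache -- the balance argued for above.
        shared_buffers = 4096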

    If you need a specific answer, try posting your specs and info to the list and ask what they recommend.
    -b
    (I'm only available at the email address provided in my profile on weekdays; if you have questions or advice during off hours, use AIM.) Also, any views I provide here or on my website are mine and not representative of the views of my work, family, friends, and sometimes even myself.

    http://www.bcyde.com

  6. #6
    Join Date
    Apr 2003
    Location
    Tunisia
    Posts
    192

    Cool, but ...

    Your explanation was clear and thorough, and it captures exactly the problem I'm running into these days: PostgreSQL eats more and more memory. What can be done? Is there a solution, a tweak, or a patch?
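    One thing worth checking before hunting for a patch: on Linux, ps and top count the SysV shared memory arena once for every backend that has touched it, so a busy server can look like it is eating far more memory than it really is. You can inspect the actual shared segment directly (a diagnostic sketch, not a fix):

        $ ipcs -m    # lists SysV shared memory segments; the large one is
                     # PostgreSQL's buffer arena, allocated once at startup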
