  1. #1
    Join Date
    Jan 2005

    Unanswered: Large tables, fast full text searches


    I'm building a search engine that indexes files found on our local network. The file table currently contains over 1.9 million tuples, and a full-text index is created using tsearch2. Only about a tenth of our network is in the database so far, so the final version will hold over 19 million tuples, but searching is already quite slow. The machine hosting PostgreSQL is an Athlon 1400 with 512 MB of memory, and it serves only as the PostgreSQL server.
    The problem is that a search with a 50-result limit takes over 3 seconds. Is there any way to speed things up, such as keeping the index in the machine's memory? Or is this the maximum performance I can expect from this hardware?
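    For reference, a minimal tsearch2-era setup looks roughly like this. This is a sketch under assumptions: the table name `files`, the column names `filename` and `idxfti`, and the search term are all illustrative, not from the post.

    ```sql
    -- Hypothetical schema: the tsearch2 contrib module (PostgreSQL 7.x/8.0)
    -- provides the tsvector type and the @@ match operator.
    ALTER TABLE files ADD COLUMN idxfti tsvector;
    UPDATE files SET idxfti = to_tsvector('default', filename);

    -- A GiST index is what lets @@ searches avoid a full sequential scan.
    CREATE INDEX files_fti_idx ON files USING gist (idxfti);

    -- The kind of query described above, limited to 50 results.
    SELECT filename
    FROM files
    WHERE idxfti @@ to_tsquery('default', 'report')
    LIMIT 50;
    ```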

  2. #2
    Join Date
    Feb 2005
    Quote Originally Posted by blubber
    I'm building a search engine...
    a full-text index is created using tsearch2...
    Is there any way to speed things up?
    I haven't done this myself yet, but I'm in the planning stages for a similar project (a full-text search engine using PostgreSQL). Try a Google search for "kernel tweak postgresql" or "postgresql.conf tune shared_buffers"; lots of sites will come up.

    In other words, it helps to tune both PostgreSQL and the kernel. (Hopefully you're not running PostgreSQL on Windows.)

    Also, it would be a good idea to add more memory, since memory is so cheap; 1 GB is my minimum these days. And use a SCSI hard drive if you aren't already.
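    To make that tuning advice concrete, here is a sketch of the relevant postgresql.conf settings for a dedicated 512 MB server of that era. The values are illustrative starting points I'm assuming, not recommendations from this thread; on Linux you may also need to raise the kernel's shared-memory limit (kernel.shmmax) to match.

    ```
    # postgresql.conf -- illustrative values for a dedicated 512 MB box
    shared_buffers = 8192          # 8192 x 8 kB pages = 64 MB of buffer cache
    effective_cache_size = 32768   # hint to the planner: ~256 MB of OS file cache
    sort_mem = 8192                # per-sort memory in kB (renamed work_mem in 8.x)
    ```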

  3. #3
    Join Date
    Feb 2005
    Almost forgot... if you haven't already, make sure the data is indexed properly and queries are being planned efficiently: run VACUUM and ANALYZE. I'm sure others on the forum have more detailed advice (I'm a newbie), but VACUUM and ANALYZE should improve performance.
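    In practice that advice boils down to something like the following (the table, column, and index names are hypothetical, carried over from the earlier sketch):

    ```sql
    -- Reclaim dead rows and refresh the planner's statistics.
    VACUUM ANALYZE files;

    -- Then check that the full-text index is actually being used:
    EXPLAIN
    SELECT filename
    FROM files
    WHERE idxfti @@ to_tsquery('default', 'report')
    LIMIT 50;
    -- An index scan on the tsvector index in the plan is what you want;
    -- a Seq Scan over 1.9 million rows would explain 3-second searches.
    ```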
