  1. #1
    Join Date
    Aug 2004
    Posts
    54

    Unanswered: Memory Bottleneck?

    I am trying to figure out whether we have a memory bottleneck on our SQL Server. We have a large database (90 GB) on SQL Server 2000 Standard Edition. Currently we have 2 GB of memory. To me that already seems low, but Standard Edition can't use more than 2 GB anyway. Our cache hit ratio is very low (40%), which also tells me we need memory. The thing that gets me is that we still have 147 MB of memory available. If memory really were the problem, wouldn't that 147 MB be used?
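
    For reference, here is roughly how I'm reading the cache hit ratio (a quick sketch against master.dbo.sysperfinfo on SQL 2000; the exact counter names are an assumption based on my build):

        -- Buffer cache hit ratio is exposed as two counters: the ratio and its base.
        -- Dividing one by the other gives the percentage.
        SELECT CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS DECIMAL(5, 2))
                   AS buffer_cache_hit_ratio_pct
        FROM   master.dbo.sysperfinfo r
        JOIN   master.dbo.sysperfinfo b
               ON  b.object_name  = r.object_name
               AND b.counter_name = 'Buffer cache hit ratio base'
        WHERE  r.counter_name = 'Buffer cache hit ratio'
          AND  r.object_name LIKE '%Buffer Manager%';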

    Thanks much.

  2. #2
    Join Date
    Jan 2005
    Location
    TempDb
    Posts
    228
    What indicates that 147 MB are available? 40% is far too low for production. I get antsy when the cache hit ratio (on a hot system, meaning the application has run its course and what you see is what you'll get) drops below 98%.

    Memory really is your problem, confirmed by your cache hit ratio, and you can further confirm it by checking whether the OS is paging (Task Manager).

    You can't play in the major leagues with minor league hardware and software (Standard Edition).
    I love deadlines. I like the whooshing sound they make as they fly by. Douglas Adams

  3. #3
    Join Date
    Aug 2004
    Posts
    54
    Thanks for the reply.

    Performance Monitor is indicating 147 MB free, and the OS is not paging either. That is what I don't understand. Does it have something to do with Standard Edition's limitations?

    I have been trying to get them to upgrade to Enterprise, but due to budget constraints I have been denied. Hopefully it will go through someday.

    Thanks much.

  4. #4
    Join Date
    Jan 2005
    Location
    TempDb
    Posts
    228
    Sorry - my bad.

    Maybe SQL Server's memory is throttled. You can check the memory configuration on the SQL Server via sp_configure.
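
    Something along these lines (just a sketch; the option names assume the stock SQL 2000 settings, and the values are in MB):

        -- Show the current memory configuration.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)';
        EXEC sp_configure 'max server memory (MB)';

        -- If max server memory has been capped below what the box can give,
        -- raise it (example value only):
        -- EXEC sp_configure 'max server memory (MB)', 1792;
        -- RECONFIGURE;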

    Either way, a 40% cache hit ratio is horrendous. It means 60% of page requests are going to disk. Your disks must be spinning like tops.

    And, sorry to hear about the budgeting woes.
    I love deadlines. I like the whooshing sound they make as they fly by. Douglas Adams

  5. #5
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    Perfmon can be more specific, but just by looking at Task Manager you can see that your service is maxed out on memory. What is the number against the service now? Whatever it is, add 256 MB to account for memtoleave, and that's your total. With a 90 GB database you may also consider increasing memtoleave to 384 MB, since that portion of memory is heavily used by requests exceeding 8 KB.
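    A quick way to see what the service has actually grabbed without Task Manager (a sketch against sysperfinfo; memtoleave itself is set with the -g startup switch on sqlservr.exe, e.g. -g384, which I'm assuming applies to your build):

        -- Buffer pool memory as SQL Server reports it, in KB; add roughly
        -- 256 MB of memtoleave on top to approximate the process working set.
        SELECT counter_name, cntr_value AS kb
        FROM   master.dbo.sysperfinfo
        WHERE  object_name LIKE '%Memory Manager%'
          AND  (counter_name LIKE 'Total Server Memory%'
             OR counter_name LIKE 'Target Server Memory%');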
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  6. #6
    Join Date
    Aug 2004
    Posts
    54
    Yeah, those disks are spinning pretty good.

    The service is currently using 1,701,032 K. So we are pretty close to the max memory, but even with the 256 MB of memtoleave, I would expect it to be higher.

    Thank you both for your help.

  7. #7
    Join Date
    Oct 2003
    Location
    Jordan
    Posts
    28
    Hi

    I think you need to try rebuilding the database (shrinking it) in order to reorganize the database pages and indexes, because a 40% ratio tells me you have a lot of deletions and updates, and therefore a lot of fragmentation.
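
    To check whether fragmentation really is the issue first, something like this sketch using the SQL 2000-era commands (the table name is only a placeholder):

        -- Report fragmentation for every index on a table.
        DBCC SHOWCONTIG ('dbo.YourLargeTable') WITH ALL_INDEXES;

        -- If scan density is low or logical fragmentation is high, rebuild:
        -- '' rebuilds all indexes on the table, 90 is the fill factor.
        DBCC DBREINDEX ('dbo.YourLargeTable', '', 90);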

    Finally, reconsider the database design and try to use archiving for old transactions.

    Regards,
    Firas arramli
    Systems Analyst

  8. #8
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Running a database with 90 GB of data on a machine with 2 GB of RAM is an open invitation to performance problems. Your management has effectively guaranteed that you'll never have good response times for complex queries, and maybe not even for simple ones. As long as management realizes that they've effectively throttled the server so that it can't possibly provide good response no matter what you do, then you've done all that you really can do.

    You can certainly scale the data down by archiving what you no longer need, which will help. If you could archive 90% of the data or more, you could get back to where I think you'd have a fighting chance at decent performance with 2 GB of RAM.

    You might also be able to consolidate and rebuild your indexes to help somewhat. I often find redundant indexes loading up RAM when a database has been used "ad hoc" for a long time without decent DBA management. One of the medical databases that I saw around the first of the year had multiple redundant indexes that actually hurt performance. I got rid of all of them, added a surrogate key, and built one new index; that reduced the size of the database file by around sixty percent and cut the run time of the critical queries to 22 percent of their old duration. This was an extreme case, but it does offer some hope for massive improvements without having to rebuild your entire schema or the applications that use it.
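
    If you want a quick inventory to hunt for redundant indexes yourself, a rough sketch against sysindexes on SQL 2000 (it just lists index names and sizes so you can eyeball the overlaps; auto-created statistics such as _WA_Sys... rows will also show up and can be ignored):

        -- List every index on user tables with its size, biggest first.
        SELECT  OBJECT_NAME(i.id)  AS table_name,
                i.name             AS index_name,
                i.indid,
                i.used * 8 / 1024  AS used_mb      -- 8 KB pages -> MB
        FROM    sysindexes i
        WHERE   i.indid BETWEEN 1 AND 250          -- skip heaps (0) and text/image (255)
          AND   OBJECTPROPERTY(i.id, 'IsUserTable') = 1
        ORDER BY table_name, used_mb DESC;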

    Good luck. I've been in your shoes (particularly when dealing with government/non-profit entities), and it isn't comfortable. When I know easy ways to improve life for a lot of people, but can't get the budget to make things happen, it frustrates me.

    -PatP

  9. #9
    Join Date
    Aug 2004
    Posts
    54
    Thanks for your help.

    I actually rebuild the indexes on a regular basis, so index fragmentation is not the problem. I also could not find any redundant indexes. I would love to archive some data, but some people at this company insist they need at least two years of data in this thing, and they want five. Currently there is only about 1.75 years' worth, so this beast is just getting uglier by the day. The way I figure it, this database will be about 105 GB by the time we hit two years. How much memory would you recommend for a database that size?

  10. #10
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    It's actually not the size of the database that matters, but rather the amount of memory needed to process 80% of your requests. A database can be 100 times the size of RAM and still satisfy the performance requirements, or it can be half the size of RAM and need more help than any memory increase can offer.
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  11. #11
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Quote Originally Posted by rdjabarov
    It's actually not the size of the database that matters, but rather the amount of memory needed to process 80% of your requests. A database can be 100 times the size of RAM and still satisfy the performance requirements, or it can be half the size of RAM and need more help than any memory increase can offer.
    Very true, but most databases behave in ways that allow good guesses at the hardware configuration.

    When it comes to more RAM for a database that is demonstrably RAM-bound now, more is better. I'd ask for as much as your configuration will support (either 8 or 32 GB total), and only give ground very grudgingly.

    At least in the servers that I've had to "rescue", undersized RAM has nearly always been the number one problem. Your mileage may vary, but it is the first thing I suggest and the last thing I compromise on in almost every case I remember.

    -PatP

  12. #12
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    That's called "band-aiding" the problem, not solving it. A well-optimized database will hardly ever point to memory as the bottleneck, unless everything else is just as "good" as the database itself.
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  13. #13
    Join Date
    Jan 2005
    Location
    TempDb
    Posts
    228
    Is there any other kind of database than a well-optimized one? Oh, wait, let me fire up SQL EM...

    Seriously, feed SQL Server as much RAM as you can (but no more than it would take to cache your databases). So, to second Pat's point, don't let memory get in the way.
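
    A rough way to put a number on "enough to cache the databases" (a sketch that sums the allocated data-file sizes from sysfiles; run it in each database of interest):

        -- Total allocated data-file size for the current database, in MB (8 KB pages).
        SELECT DB_NAME()            AS database_name,
               SUM(size) * 8 / 1024 AS allocated_mb
        FROM   sysfiles
        WHERE  groupid <> 0;        -- groupid 0 = log file; keep data files only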

    Anyone remember the Saturday Night Live skit featuring Ed Asner and you can't put too much water in the nuclear reactor?

    Trying to solve a performance problem when lack of memory may be an issue, and in this case it definitely is, is extremely difficult if not futile.

    Correction to my second post in this thread: SQL Server throttles its own memory use, which is why you are not seeing paging at the OS level.
    Last edited by MaxA; 02-22-05 at 02:28.
    I love deadlines. I like the whooshing sound they make as they fly by. Douglas Adams

  14. #14
    Join Date
    Oct 2003
    Location
    Jordan
    Posts
    28
    Hi

    It seems that your application is a data monster, and the way it is running now, if it continues as is, you will fall into the same problem again.

    Data archiving can be done in a smart way that doesn't bother the user: give the user an option on the form that lets him query either online data or historical data.
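
    One way to do the split behind the scenes (a rough sketch; the table and column names are made up for illustration, and the archive table is assumed to have the same structure as the live one):

        -- Move transactions older than two years into an archive table.
        BEGIN TRANSACTION;

        INSERT INTO dbo.OrdersArchive
        SELECT * FROM dbo.Orders
        WHERE  OrderDate < DATEADD(year, -2, GETDATE());

        DELETE FROM dbo.Orders
        WHERE  OrderDate < DATEADD(year, -2, GETDATE());

        COMMIT TRANSACTION;
        GO

        -- "All data" view for the historical-query option on the form.
        CREATE VIEW dbo.OrdersAll AS
        SELECT * FROM dbo.Orders
        UNION ALL
        SELECT * FROM dbo.OrdersArchive;
        GO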

    Another way to get around this is to add other DB server(s) and use replication. All your reporting queries can run against server B and online entries can be done on server A, while remote users query server C. In other words, you may consider distributing the data among several servers.
    Firas arramli
    Systems Analyst

  15. #15
    Join Date
    Aug 2004
    Posts
    54
    Thanks again for all of your help.

    I will try to convince management to budget for more memory.
