  1. #1
    Join Date
    Aug 2003
    Mexico City

    Unanswered: Concurrency issues, tuning, and other suggestions.

    Ok, I have a few questions here for whoever can lend a hand:

    I use SQL Server 7.0 as a production server on a Compaq ProLiant box with three 36 GB drives, 3 GB of RAM and 4 Pentium 4 processors, yet I'm having horrible concurrency problems. The largest table holds a million or so records, and it is sometimes joined with other big tables for reporting, which really slows the database down to a crawl.

    Question 1. I ran the Index Tuning Wizard on a day's worth of workload that I captured with the Profiler. So far it suggests dropping some indexes but doesn't suggest creating any new ones, and it says that after the recommendations are applied to the server the improvement will be 0%... That's right, 0% improvement...

    Did I do something wrong? Do I need to run the Profiler for a week or so to get a larger workload file? Are the indexes I have (all non-clustered PKs) enough for my current requirements? Can it be helped?

    Question 2. I figure that I can't have OLAP and OLTP on the database at the same time unless I change the whole database design. Our database is heavily used on all weekdays and there are no real schedules for reporting; it all just happens at the same time. How can I improve performance? Reindexing? Building extra indexes? Redesigning or denormalizing the database?

    Question 3. I thought about reindexing the tables to use a 50% fill factor. That would give me balanced performance between frequent inserts/updates (OLTP requests) and similar performance for selects and reporting (OLAP requests). Am I right here or have I got it all wrong?
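    In case it matters, what I had in mind is rebuilding the indexes with the new fill factor, something along these lines (the table and index names below are just examples, not my real schema):

        -- rebuild all indexes on a hypothetical Orders table with a 50% fill factor
        -- (an empty index name means "every index on the table")
        DBCC DBREINDEX ('Orders', '', 50)

        -- or rebuild a single index, e.g. a hypothetical primary key
        DBCC DBREINDEX ('Orders', 'PK_Orders', 50)

    My understanding is the fill factor only applies when the index is built or rebuilt, so it won't be maintained as pages fill up afterwards.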

    Question 4. Are there any GOOD comprehensive books you could suggest for intermediate DBAs? I've read about every piece of SQL Server performance tuning material available on forums and the web, and although some of it has been helpful it still falls short, or I'm not applying it correctly, because although I keep the performance from worsening I can't seem to get it to improve.


  2. #2
    Join Date
    Feb 2002
    I think you should add a clustered index on every table to speed up selects (it would slow down inserts, but in your case that doesn't seem to be a major problem).

    The server works fine under "normal" work, and it's just the reports that make the server crawl?
    If that's the case you don't have to remove any indexes, just make sure that the indexes are the right ones for your application. Forget about the Index Tuning Wizard and look at your SQL commands instead. Do you have indexes to match the WHERE clauses in your commands?
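    For example (the table and column names here are just placeholders, not from your actual schema):

        -- suppose a report filters like:
        --   SELECT o.* FROM Orders o WHERE o.CustomerId = @id AND o.OrderDate >= @from
        -- an index leading on those columns lets the optimizer seek instead of scanning the table;
        -- a clustered index suits range reads like reports (only one per table, though -
        -- a non-clustered index on the same columns is the fallback if the slot is taken)
        CREATE CLUSTERED INDEX IX_Orders_Customer_Date
            ON Orders (CustomerId, OrderDate)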

    Are your SQL commands optimized for parallel query execution? (See Books Online and the execution plan in QA.)

    Is the server using all 4 processors?
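    If you want to check what the server is allowed to do with them, the parallelism-related settings can be displayed with something like this (calling sp_configure with no value just shows the current setting):

        -- these are advanced options, so they only show up after this:
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE

        EXEC sp_configure 'max degree of parallelism'
        EXEC sp_configure 'cost threshold for parallelism'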

    Hope some of it helps!
    - Jonte

  3. #3
    Join Date
    Aug 2003
    Mexico City


    Thank you Jonte!

    I'll try adding clustered indexes as I go. We had a bad situation last May where the database slowed down to a crawl and all the systems were unusable; an outsourcing firm came here, took off all the clustered indexes and made them non-clustered...

    It didn't help that much, but back then we had to try EVERYTHING! I'll put those back again, as it was proven that the issue was concurrency.

    The server works fine under a "normal" workload, that is, when I don't have all the users at once in the 14 systems that the database serves. But when they're all in together, especially users from accounting and expenses, they really give everybody else a headache...

    Processes from accounting and expenses are very heavy and resource intensive, and also poorly designed. So they lock tables and hold on to those locks until their transactions finish, which sometimes takes over 60 seconds!

    And when I say it slows down, it's because everything starts failing: reports slow down, processes take forever, and user response times go through the roof, so users start complaining.
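    Next time it happens I'm going to try to catch the blocking chain while it's in progress, with something along these lines (just a sketch against sysprocesses; sp_who works too):

        -- who is blocked right now, by which spid, and for how long
        SELECT spid, blocked, waittime, lastwaittype, cmd, hostname
        FROM master..sysprocesses
        WHERE blocked <> 0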

    I thought the Index Tuning Wizard would help out, but it hasn't. And I knew that tuning had to be done by hand, but I wanted to give it a shot; after all, they (Microsoft Training Experts) tell you it works wonders! Yeah right!

    I'll have to check the zillion SQL queries for index usage and execution plan... I just wanted a quicker way.

    And Yup, the server is using all four processors... and 2.5 GB of RAM.

    Thank you for the input.

  4. #4
    Join Date
    Jan 2003
    Nottinghamshire, UK
    Have you tried using SQL Profiler to monitor real-time activity (don't forget the filtering)? I'd put as much effort into this as into DB tuning at this stage.

    You mentioned 14 different apps, and it only takes one badly written one to throw a spanner in the works!

    U could try using the query governor to cut off long-running queries & see who screams first - lol.
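    If U go that route it's just a config setting - strictly speaking it refuses to start queries whose *estimated* cost exceeds the limit rather than timing out running ones, but the screaming effect is the same. The 300 below is an arbitrary example value, pick your own:

        -- server-wide (it's an advanced option, so 'show advanced options' must be on first)
        EXEC sp_configure 'query governor cost limit', 300
        RECONFIGURE

        -- or just for one connection
        SET QUERY_GOVERNOR_COST_LIMIT 300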

    Some client app developers hit the DB serially with nasty client-side or even server-side cursors unnecessarily - doesn't take many of these to make your server sweat.

    Also monitor those locks - if someone's hitting the DB with a 60-second exclusive lock then no amount of RAM's gonna help U.
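    A quick way to see who's holding what (exclusive locks show up with mode X), and then match the spid back to a login/app:

        EXEC sp_lock
        EXEC sp_who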

    Basically I'm saying it looks to me like badly written client apps - unfortunately U may not be able to alter them - but U could at least identify which ones they are & what the most intensive processes they're running actually are - then take a fresh look from there.

    As always - I could be wrong :-(

    "Everything should be made as simple as possible, but not simpler." - Albert Einstein
    "Everything should be made as complex as possible, so I look Cleverer." - Application Developer

  5. #5
    Join Date
    Jul 2003
    San Antonio, TX
    SQL Profiler is a very good start. You can select the SQLProfilerTSQL_Duration template and filter out anything shorter than 30 or 40 seconds to start with. This way you're not overwhelmed with a "zillion" of queries.

    At the same time you can specify a TRC file to save the output to. Once you're done collecting data for whatever period of time you choose, you can open the trace file and the durations will be in ascending order.

    Once you pick an offender, paste it into QA and turn IO stats on and the execution plan on. In the IO stats go after logical reads and scan counts on large tables, and in the exec plan look for table and index scans.
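    For the session settings in QA that usually means something like this (the graphical plan itself is toggled from Query | Show Execution Plan, Ctrl+K):

        -- the query still runs, and each table reports its scan count and logical reads
        SET STATISTICS IO ON
        GO

        -- text alternative to the graphical plan: it shows the plan INSTEAD of running
        -- the query, so don't combine it with the IO stats run
        -- SET SHOWPLAN_TEXT ON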

    Hope it'll help.
