  1. #1
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69

    SQL7 performance problem

    I have an SQL 7 server on NT4 SP6, Pentium II500, HW Raid5, 1Gig Ram.
    In the db, there's one table A with ~100,000 rows.
    SELECT col1 from A takes between 15 and 45 secs in Query Analyzer.
    SELECT * from A takes 15min+!
    So far, I can only compare that to my test system, where a SELECT col1 takes less than a second, SELECT * around 15secs.

    Test System is a P4 1.8, 768MB Ram, Single SCSI disc on Win2K, SQL2000, identical data.

    I'd expect that to be faster, but not THAT much faster. I suspect the performance problems are hardware-related, but at the moment I only have remote access.

    I'd appreciate any hints on what to look for, what stupid things I may have done or forgotten, or what I could do to track down the source of the problem.

    TIA, chris

  2. #2
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Have you compared execution plans on both systems?
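    One way to capture a plan for comparison in Query Analyzer (a sketch; works on both SQL 7 and 2000, table/column names taken from the post above) is the textual showplan, which returns the plan instead of executing the query:

    ```sql
    -- Return the estimated plan as text; must be the only statement in its batch
    SET SHOWPLAN_TEXT ON
    GO
    SELECT col1 FROM A
    GO
    SET SHOWPLAN_TEXT OFF
    GO
    ```

    Ctrl+L in Query Analyzer shows the same thing graphically; the text form is easier to diff between the two servers.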
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  3. #3
    Join Date
    Feb 2002
    Posts
    2,232
    I would also compare the database sizes between the two systems. What types of indexes are you using (clustered?) and do you have heavy fragmentation? You can use DBCC SHOWCONTIG to examine fragmentation. Are you using any non-standard data types like text/ntext/image? How do you load the data into the test environment: do you truncate and bulk load all data, or do you update? How long have the RAID-5 disks been in use, and have you had any issues with them?
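    A minimal sketch of both checks (the table name A is taken from the first post; SQL 7's DBCC SHOWCONTIG wants an object id, SQL 2000 also accepts the name directly):

    ```sql
    -- Fragmentation report for the table in question
    DECLARE @id int
    SET @id = OBJECT_ID('A')
    DBCC SHOWCONTIG (@id)

    -- Rough size/space figures for the current database, for comparing the two systems
    EXEC sp_spaceused
    ```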
    If you do not have SQL Server Books Online (BOL) installed - please do so. The majority of questions asked in the SQL Server forum could be eliminated if people had access to bol. BOL

  4. #4
    Join Date
    Sep 2003
    Posts
    522
    Do you have remote access to both prod and test, or is your test local?

  5. #5
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    OK, somehow I have the creepy feeling I'm tripping over my own feet :/

    Some background:

    This is what DBCC SHOWCONTIG returned (after I dropped and recreated a clustered index on production, see below):

    Production site:
    - Pages Scanned................................: 2345
    - Extents Scanned..............................: 294
    - Extent Switches..............................: 293
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 100.00% [294:294]
    - Logical Scan Fragmentation ..................: 0.00%
    - Extent Scan Fragmentation ...................: 18.03%
    - Avg. Bytes Free per Page ....................: 90.8
    - Avg. Page Density (full).....................: 98.88%

    Test environment:
    - Pages Scanned................................: 2546
    - Extents Scanned..............................: 322
    - Extent Switches..............................: 321
    - Avg. Pages per Extent........................: 7.9
    - Scan Density [Best Count:Actual Count].......: 99.07% [319:322]
    - Logical Scan Fragmentation ..................: 0.98%
    - Extent Scan Fragmentation ...................: 29.81%
    - Avg. Bytes Free per Page ....................: 724.4
    - Avg. Page Density (full).....................: 91.05%

    (output translated from the German version)
    My reading of the above is that production looks better than testing?!

    Idea on table structure:

    id int(4), not null, identity(1,1), primary key
    catnumber nvarchar(12), not null
    version nvarchar(30), not null
    text nvarchar(255), not null
    tag char(10), null
    bunch of other fields, 2 of them ntext

    This is a catalog that is modified exactly once a year. In between, it's practically read-only.

    Searches on this table are usually of the form
    SELECT catnumber, text, whatever FROM mytable WHERE (catnumber LIKE 'foo%' OR text like '%bar%') AND version = 'someversion' AND tag ='sometag' ORDER BY catnumber

    I've indices on id (surprise), catnumber, text, version and tag. Additionally, I have a combined clustered index on (version, catnumber). This should guarantee that matches are physically close together. For testing, I removed the indices completely and created them one at a time, but the results don't differ significantly?!?
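    The combined clustered index described above could be (re)created like this (a sketch; mytable and the column names are as given in this post, and it assumes the primary key on id is nonclustered, since a table can have only one clustered index):

    ```sql
    -- Rows with the same version are stored physically together, ordered by
    -- catnumber, which matches the WHERE version = ... ORDER BY catnumber pattern
    CREATE CLUSTERED INDEX IX_mytable_version_catnumber
        ON mytable (version, catnumber)

    -- Supporting nonclustered index for the tag filter
    CREATE NONCLUSTERED INDEX IX_mytable_tag
        ON mytable (tag)
    ```

    Note that the `text LIKE '%bar%'` predicate can never use an index seek anyway, because of the leading wildcard.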

    I do a simple SELECT catnumber FROM mytable
    and execution plans are the same on testing and production:

    SELECT <- mytable(index catnumber)

    but on production, I get 99% on the mytable index scan, whereas I get 0% on testing. Same goes for the table scan when I run the query without indices.

    For the hardware: the disks are about four years old, no issues so far. I checked the logs and there's nothing suspicious there.

    I really hope I'm missing something obvious here, and thanks for your help!

  6. #6
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    Oh, and some info I forgot:

    I have local access to testing and a dial-in pcAnywhere (don't ask) connection to production. I'm running the queries through QA locally to rule out network issues. I was onsite yesterday and ran them without pcAnywhere; the results were consistent. And yes, I'm the only user on the server while testing.

    The test DB is a backup from production, but keep in mind that I'm using SQL2K for testing and SQL7 in production. If the problem persists, I'll set up SQL7 here for testing, but I have to free a machine for that and check with my co-worker first (read: on Monday :/).

    Database sizes are about the same and I didn't do any optimizing on testing so far.

  7. #7
    Join Date
    Sep 2003
    Posts
    522
    Apples and oranges! Performance on 7.0 cannot (I really hope I'm right here) be expected to be the same as on 2K, simply because the optimizer in 2K is far superior to the one in 7.0.

  8. #8
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    I know, I know, but at the moment I have no other option.

    Nevertheless, a simple SELECT bla FROM blubb over approx. 100,000 rows really should NOT take ~40 secs when nothing else is happening on the server. Or am I totally wrong there? (I wouldn't mind convincing the customer to upgrade their hardware and/or software.)

  9. #9
    Join Date
    Sep 2003
    Posts
    522
    40 seconds before the last row is retrieved or 40 seconds before you see the first row? big difference there.

  10. #10
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    <note to self>Blah, first think, then post</note>
    40 secs until the last row is fetched.

    I did some double-checking, though. I have some other tables with several 100k rows and did some testing with them. The results are consistent, i.e. execution time is proportional to the number of rows. If I'm not totally braindead yet (which may be the case), that means I have either some kind of hardware problem, including "it's plain too slow for our purpose", or a general setup/database file/size/configuration problem, but no problem with that specific table. I *know* that execution time was better a while ago, as I did some testing on that specific table because it is needed *very* frequently: execution time was around 4 to 6 secs with around 50k rows for that SELECT a FROM b (just checked my docs).

    Generally, after some more fiddling with TOP x PERCENT and the like, I'm now quite sure that execution time depends primarily on the amount of data transferred. SELECT TOP 10 PERCENT takes around 5 secs, 50 percent around 25 secs, and a select with a WHERE clause that returns exactly one row executes almost instantly. So it seems I have to look further on the hardware side.
    Last edited by chrisp_999; 02-08-04 at 16:08.
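    The timing runs described above can be made repeatable by letting the server itself report elapsed time and I/O per statement (a sketch; mytable is the placeholder name from the earlier post):

    ```sql
    -- Per-statement logical/physical reads and CPU/elapsed time
    SET STATISTICS IO ON
    SET STATISTICS TIME ON

    SELECT TOP 10 PERCENT catnumber FROM mytable  -- ~5 secs in the tests above
    SELECT TOP 50 PERCENT catnumber FROM mytable  -- ~25 secs
    SELECT catnumber FROM mytable                 -- full result set

    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF
    ```

    If elapsed time scales with rows returned while logical reads stay proportional too, that supports the "amount of data transferred" reading.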

  11. #11
    Join Date
    Feb 2002
    Posts
    2,232
    What is the performance difference for both servers when the execution plan indicates a table scan only ?

  12. #12
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    Speaking of time, same relation. ~1 sec on testing, around 45secs. on production.

    Table scan consumes 100% on testing, 99% on production.

  13. #13
    Join Date
    Feb 2002
    Posts
    2,232
    What are the metrics returned by the execution plan for the index scan and the table scan on both machines, such as IO/CPU/subtree cost? Also, these hard disks have been running for 4 years and have never been replaced? How many transactions do you process in a month? What is the effective size of the hard disks, how much space is free, and how large are your databases? Have you run any RAID/HD utilities against your drives?
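    One way to get those per-operator cost figures in text form (a sketch; mytable as in the earlier posts) is SHOWPLAN_ALL, which returns EstimateIO, EstimateCPU and TotalSubtreeCost columns for every plan operator without executing the query:

    ```sql
    -- Return the full estimated plan with cost columns; must be alone in its batch
    SET SHOWPLAN_ALL ON
    GO
    SELECT catnumber FROM mytable
    GO
    SET SHOWPLAN_ALL OFF
    GO
    ```

    The same figures appear as tooltips on the graphical plan in Query Analyzer; the tabular form is easier to copy between the two servers for comparison.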

  14. #14
    Join Date
    Mar 2002
    Location
    Bielefeld, Germany
    Posts
    69
    First of all, thanks for your help. The problem is indeed hard disk performance. I'm now checking this with the onsite hardware guys.

    It was very helpful to be able to present data that proves this was not SQL-related, so thanks again for your help.

    And sorry for the delayed answer; I had (and am still having) problems reaching this site.
