  1. #1
    Join Date
    Jul 2002
    Location
    Lima - Perú
    Posts
    30

Unanswered: Format database disk with 64K NTFS blocks

    Hi:

Somebody told me that if I format my database disk with 64K NTFS blocks, I can get better performance. Is that true? Is there any problem in SQL Server with this block size?

  2. #2
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    ¿Opinión qué?

    Well that was meant to be...

    Say what?
    Brett
    8-)

    It's a Great Day for America everybody!

    dbforums Yak CorralRadio 'Rita
    dbForums Member List
    I'm Good Once as I ever was

    The physical order of data in a database has no meaning.

  3. #3
    Join Date
    Jul 2002
    Location
    Lima - Perú
    Posts
    30

Do you understand me?

Somebody told me that I will get better performance by formatting the disks that hold the database files with a 64K block size on the NTFS file system.
Questions:

Is that true?
Is there any problem in SQL Server with this block size?

  4. #4
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    Do you mean instead of FAT?

NTFS is more efficient than FAT

    But this is at the OS level....and I've never seen anything but NTFS...
    Brett
    8-)


  5. #5
    Join Date
    Sep 2003
    Posts
    364
Theoretically, it makes sense. What I've been taught is that larger block allocations should be faster, at the expense of wasted disk space: the smaller the files stored on the disk, the larger the proportion of space wasted to store them. Smaller block allocations utilize space better, at the expense of slower read/write performance. It would be interesting to see if it provides any improvement. I've been wanting to test this for a while but have never had a chance. It would be interesting to see the results from different block sizes on one and multiple hard drives with different RAID configurations, especially if you were to compare read and write performance.
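The space/speed trade-off described above can be sketched with a little arithmetic (a hypothetical illustration, not a benchmark; the cluster and file sizes below are just examples):

```python
# Rough illustration of the block-size trade-off: a file always occupies
# a whole number of NTFS clusters (allocation units), so small files
# waste the remainder of their last cluster ("slack" space).

def allocated_size(file_size: int, cluster: int) -> int:
    """Bytes actually consumed on disk: file size rounded up to whole clusters."""
    return ((file_size + cluster - 1) // cluster) * cluster

for cluster in (4 * 1024, 64 * 1024):
    for file_size in (1_000, 100_000):
        used = allocated_size(file_size, cluster)
        print(f"cluster {cluster // 1024:>2}K, file {file_size:>7} B "
              f"-> on disk {used:>7} B, slack {used - file_size:>6} B")
```

With 64K clusters a 1,000-byte file still consumes 65,536 bytes on disk, which is the wasted-space side of the trade-off.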

  6. #6
    Join Date
    Sep 2003
    Posts
    364
Just talked to my MS insiders. They said SQL Server writes in 8K chunks, so the larger allocation on the disk won't matter as far as SQL Server is concerned. However, it'll help performance for the OS and pagefile.

So I guess anything >= 8K shouldn't matter to SQL.

  7. #7
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
SQL Server does not write directly to the disk. Everything is preallocated in 64K extents; the only "writing" occurs to memory, in that amount (8K).
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  8. #8
    Join Date
    Jul 2002
    Location
    Lima - Perú
    Posts
    30

I agree, but...

I agree with you, but when SQL Server reads or writes an extent between memory and disk due to a checkpoint or page fault, it will be working with 8 pages of 8K each (SQL Server's data page size). Doesn't that reduce the number of I/Os?


  10. #10
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
Personally, I would not set the allocation size of the disk to 64K. The main reason is that you can (or at least should) have queries that are only interested in single pages, or fewer than 8 pages at a time. Notable examples are code tables that store data for integrity purposes, which would be stored in mixed extents. Also, some index seeks should cause single pages to be read into memory. This would cause the O/S to read 56K of extra data every time you do one of these index seeks. I am not sure what the writing side of it would be, but I suspect it would be just as bad.

A general rule of thumb from the Oracle side of the world: if you believe it will really help you (i.e. you really have major performance problems that cannot be solved otherwise), you should make the O/S allocation size equal to the database block size. Hope this helps.
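The 56K figure above follows from simple arithmetic, assuming (as this post argues) that the O/S reads a whole allocation unit to satisfy a single-page request. A minimal sketch of that worst case:

```python
# Read-amplification arithmetic for page reads, assuming the OS reads
# whole clusters (allocation units). SQL Server's page size is fixed at 8K.
PAGE = 8 * 1024

def extra_bytes_read(cluster: int, pages_needed: int = 1) -> int:
    """Extra bytes read beyond the pages actually wanted, if whole
    clusters must be read to cover the request (worst case)."""
    wanted = pages_needed * PAGE
    clusters = -(-wanted // cluster)  # ceiling division
    return clusters * cluster - wanted

print(extra_bytes_read(64 * 1024))  # 57344 bytes = 56K extra for one 8K page
print(extra_bytes_read(8 * 1024))   # 0 extra when cluster size == page size
```

Whether the O/S really drags in the whole cluster on a single-page read is exactly the point disputed in the replies that follow.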

  11. #11
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    Quote Originally Posted by MCrowley
    Also, some index seeks should cause single pages to be read into memory. This would cause the O/S to read 56K of extra data, every time you do one of these index seeks.
Pages read don't have anything to do with allocation size. If the optimizer needs to put 8K worth of data into cache, only 1 page will be put into cache, not 8 (8K x 8 = 64K = extent = suggested allocation size).

And if you really want to optimize the performance of a SQL box by messing with the allocation size, set the file growth to the same number as the allocation unit size. Preallocate the database files to a multiple of the allocation unit size using the formula:

cast((file size + allocation unit - 1) / allocation unit as int) * allocation unit
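The formula above just rounds a size up to the next multiple of the allocation unit; a quick sketch in Python (the sizes are made-up examples):

```python
def preallocate(file_size: int, alloc_unit: int) -> int:
    """Round a file size up to the next multiple of the allocation unit,
    per the formula in the post above."""
    return ((file_size + alloc_unit - 1) // alloc_unit) * alloc_unit

# e.g. a ~100 MB file on 64K allocation units
print(preallocate(100_000_000, 64 * 1024))  # 100007936, the next 64K multiple
```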
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  12. #12
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Quote Originally Posted by rdjabarov
    Pages read don't have anything to do with allocation size. If optimizer needs to put 8K worth of data into cache, - only 1 page will be put into cache, not 8 (8K X 8=64K=EXTENT=Suggested Allocation size)
    True, but you are likely to spend more time reading or updating the data than inserting it. If the disk allocation size were 64K, the disk is doing a lot more work than it has to for each physical read.

    Think of it like Costco or BJ's Wholesaler. You might want a single can of coke, but your only option at Costco is to buy the whole case, because that is the smallest allocation size. You can toss 23 of the cans, since you are only interested in one, but it still causes you more work than before. (somehow in the back of my mind, I just know this metaphor is going to bite me but good)

    Also, with an 8K allocation size, you can turn off the torn page detection option on your databases. I have never turned that off, so I am not sure how much you would save by doing that.

  13. #13
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
Damn....now I really HAVE to order Kalen's book...

    And don't you just love his disposition!

    Good stuff guys
    Brett
    8-)


  14. #14
    Join Date
    Jun 2004
    Location
    Far away from heaven
    Posts
    194
    I also read this:
    SQL Server performance can be improved when databases are created on NTFS volumes with a 64-KB extent size.
    Reference: SQL SERVER 2000 SYSTEM ADMINISTRATION (microsoft press).


64K is the maximum size in the list when formatting a drive in Windows NT.
Why, then, is the large size recommended in the reference?

  15. #15
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Dang! I could be very wrong, then. Maybe I will have to pick up that book, today. I will have to see if BOL has that, too.
