
Thread: IO_COMPLETION

  1. #1
    Join Date
    May 2006
    Posts
    38

    Unanswered: IO_COMPLETION

    Hi Folks,

    During a run of an update statement, the query gets stuck with an IO_COMPLETION wait. The thing is, this is intermittent.

    Sometimes it gets executed within a minute, and sometimes it stays in this state for days (if it is not killed earlier).

    The SQL instance is not overloaded and is on a 64-bit machine with over 50 GB of memory. Also, this is the only statement being executed at the time; there are no other active processes.

    Last time around, I ran DBCC FREEPROCCACHE and DBCC FREESYSTEMCACHE, which seems to have brought the query state back to runnable. It finished after about 5-6 hours.

    I am not looking at query tuning at this moment (that is being handled separately), but I want to understand the basics of why this is happening and where to look for the probable root cause.

    Do you gurus have any inputs?


    Thanks for your help!

    Warm Regards,
    RanjitSHans

  2. #2
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    IO_COMPLETION is exactly what it says: a SQL Server (user-mode) thread is waiting for the kernel thread to return the status of an I/O request. To provide more detailed info, we would need to know the version of the OS and SQL Server, the underlying I/O subsystem configuration (local/external drives, RAID configuration, drive specifications, controller specs), and the CPU affinity and lightweight pooling settings (and I hope priority boost is not turned on). It would also be interesting to know whether the query you're referring to is the only one that experiences this behavior. And of course, what is the query? Is it simple, complex, "action", or "read"?
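
    To see what the session is actually waiting on while the update runs, a quick look at the DMVs can help (a minimal sketch, assuming SQL Server 2005 or later; session_id 53 is a hypothetical placeholder for the session running the update):

        -- Show the current wait for one session (replace 53 with the real session_id).
        SELECT r.session_id,
               r.status,
               r.command,
               r.wait_type,
               r.wait_time,             -- ms spent in the current wait
               r.last_wait_type,
               wt.resource_description  -- which resource the wait is on, when available
        FROM   sys.dm_exec_requests AS r
        LEFT JOIN sys.dm_os_waiting_tasks AS wt
               ON wt.session_id = r.session_id
        WHERE  r.session_id = 53;
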
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  3. #3
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    With the normal execution time weighing in at 5-6 hours, I would have to think that this is a complex query. The fact that running DBCC FREEPROCCACHE relieves the problem indicates that there are probably several ways of completing this update, with some of the plans being better for some distributions of data than others. How big are the underlying tables, and how often is this update run?
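
    If you want to see which plan the update is actually picking up from cache before and after clearing it, something along these lines works (a sketch; the LIKE filter is a hypothetical placeholder for a fragment of the real statement):

        -- Find the cached plan and its stats for the UPDATE in question.
        SELECT qs.execution_count,
               qs.total_elapsed_time,
               qp.query_plan            -- XML showplan you can open and inspect
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        WHERE  st.text LIKE '%UPDATE dbo.MyBigTable%';     -- hypothetical table name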

  4. #4
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    Actually, any attempt to clear the data buffers or procedure cache will have zero effect on which plan is being executed. If the query is running, its plan will stay in memory until the execution is completed or a recompile is triggered by the structure of the query.
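
    For illustration, one way a statement-level recompile can be requested explicitly is the RECOMPILE hint (a sketch only; table and column names are hypothetical):

        -- Forces a fresh plan for this statement on every execution,
        -- regardless of what is already sitting in the procedure cache.
        UPDATE t
        SET    t.status_flag = 1
        FROM   dbo.MyBigTable AS t
        WHERE  t.batch_date = '20090101'
        OPTION (RECOMPILE);
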
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  5. #5
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    If the query is currently running, yes, but if you run DBCC FREEPROCCACHE between runs, then you will force a compilation of the statement.
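
    A minimal sketch of that sequence (run it between executions, not while the update is in flight; note that clearing the whole cache forces recompiles for every database on the instance):

        -- Clear all cached plans so the next run of the UPDATE compiles a fresh plan.
        DBCC FREEPROCCACHE;
        GO
        -- Optional sanity check: how many plans remain in cache afterwards.
        SELECT COUNT(*) AS cached_plans
        FROM   sys.dm_exec_cached_plans;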

  6. #6
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    The OP ran FREEPROCCACHE while the query was running (see the first message above), and its state turned to runnable. This means it may have been another process's execution that changed, not the query the OP was running.
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  7. #7
    Join Date
    May 2006
    Posts
    38
    Hi All,

    Thanks for taking a look into this one. I am very grateful for your inputs. This is SQL Server 2005 EE 64-bit on Windows Server 2003, with SAN drives (tempdb split out, etc.). Boost is off.

    This is an update statement on an 11 million row table, self-joining on certain criteria and updating around 1.5 million rows.

    This is the only SQL statement being run, and yes, when I ran DBCC FREEPROCCACHE I could see it become runnable (I will need to run a trace to see whether it was recompiled or whether this was just part of its normal progress).

    Coming back to my issue, is there any way I can clear out/flush out/page out the existing data cache?

    That way, I am hoping SQL Server will start with a clean slate and bring the data for the update statement into its data cache.

    Any other inputs or workarounds are most welcome.

    Thanks in advance

    Warm Regards,
    RanjitSHans

  8. #8
    Join Date
    Nov 2005
    Posts
    122
    My first guess is that there are hash or sort spills to tempdb. This causes a lot of activity on the disk and is why you are seeing IO_COMPLETION and the session being suspended.

    Check the row in sys.dm_db_session_space_usage (or was it sys.dm_db_task_space_usage?) for the session and see if internal_objects_alloc_page_count/internal_objects_dealloc_page_count is increasing. This indicates that the sort or hash operations can't be performed in memory (because SQL Server underestimated what it needs) and the sort/hash is being done in tempdb.
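
    A sketch of that check (session_id 53 is a hypothetical placeholder for the session running the update; the counts are in 8 KB pages):

        -- Internal objects = work tables and sort/hash spill space in tempdb.
        -- A steadily climbing alloc count while the update runs points to spills.
        SELECT session_id,
               request_id,
               internal_objects_alloc_page_count,
               internal_objects_dealloc_page_count
        FROM   sys.dm_db_task_space_usage
        WHERE  session_id = 53;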

    How many CPUs and tempdb files do you have, and are the tempdb files on the same volume? The best-practice advice to have one tempdb data file per CPU core is not always right when it comes to sort/hash spills to tempdb; it can actually make performance several times worse.

    I recommend viewing the webcasts at Truly Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Workshops & Webcasts - Home for more details. The presenter explains and shows examples of IO_COMPLETION waits, hash/sort spills to tempdb, and other performance issues. Perhaps this can help you solve your problem.

  9. #9
    Join Date
    May 2006
    Posts
    38


    Thanks for the wonderful links, kaffenils. Yes, this seems to have been caused by hash and sort spills to tempdb.

    One temporary workaround was restarting SQL Server to reinitialize tempdb. It seemed to work, and the job completed within the expected timelines.

    For a permanent solution, I am thinking along the lines of splitting the update into parts, so that a smaller amount of data is updated at a time and the tempdb spills are avoided; something like the sketch below.
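
    The batching idea could look roughly like this (a sketch only; table, column, and join criteria are hypothetical placeholders for the real statement):

        -- Update in chunks of 50,000 rows so each batch's sort/hash memory
        -- grant stays small and is less likely to spill to tempdb.
        DECLARE @rows INT;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            UPDATE TOP (50000) t
            SET    t.derived_value = s.source_value
            FROM   dbo.MyBigTable AS t
            JOIN   dbo.MyBigTable AS s
                   ON s.parent_id = t.id            -- hypothetical self-join criteria
            WHERE  t.derived_value IS NULL;         -- only rows not yet updated

            SET @rows = @@ROWCOUNT;
        END;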

    Do let me know your inputs if any.

    Thanks once again for everything!


    Warm Regards,
    Ranjit S Hans
