  1. #1
    Join Date
    Mar 2002
    Westfield, NJ, USA

Unanswered: Recovery stopped at 3/4 by locks. Now what?

Spid 14 is blocking spid 7, which in turn is blocking spid 51, and as a result spid 52 is stuck trying to run the T-SQL batch: "dbcc inputbuffer(52)".
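(For anyone reading along later: on SQL Server of this era, a blocking chain like the one above can usually be inspected with something like the queries below. `master..sysprocesses` and its `blocked` column are standard, but the exact output depends on the server; the spid 14 is just the head blocker from this post.)

```sql
-- List every session that is currently blocked, and who is blocking it
SELECT spid, blocked, waittime, lastwaittype, cmd, status
FROM   master..sysprocesses
WHERE  blocked <> 0

-- Show the last batch a given spid submitted (here, the head blocker)
DBCC INPUTBUFFER(14)
```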

I have no idea what I am talking about here.
No, there is no backup - unfortunately, that much I do know.

    It all happened during indexing.
    "mdf" data size is 18 GB - if this makes any difference...

    Pls help!!!!!!!!

  2. #2
    Join Date
    Dec 2001
    Toronto, Canada
Are you trying to create indexes while users are connected to the server? If so, you should wait until after hours.

Also, you had an earlier post about creating indexes. If I remember correctly, you said something about creating an index on every single field. If that is correct, please review your need for this many indexes; I truly believe you do not need them all. Some of this you may already know, but I am stating it for others who may read this post.

You should create your clustered index first, prior to any other index. If it is created after the other indexes, all of them will be rebuilt, since the clustered index reorganizes your table's physical order. Also, depending on how the table is used (read vs. write) and the time it takes to create a clustered index, you may opt not to have one at all; this reduces creation time, but can leave gaps in your data pages if data is updated or deleted.
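A sketch of that ordering (table, column, and index names are made up for illustration):

```sql
-- Create the clustered index FIRST: it physically reorders the table,
-- and any nonclustered index built before it would have to be rebuilt.
CREATE CLUSTERED INDEX IX_BigTable_Id
    ON BigTable (Id)

-- Nonclustered indexes come afterward, so each is built only once.
CREATE NONCLUSTERED INDEX IX_BigTable_Field01
    ON BigTable (Field01)
```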

  3. #3
    Join Date
    Mar 2002
    Westfield, NJ, USA

"Welcome to the machine".

    Achorozy, thanx for your input.

You are right - having some 100 fields indexed, on 6-10 million records of about 450 bytes each, sounds (and is) kind of absurd.

However, this is just the first (and, as it turns out, somewhat rocky) stage of the project, meant to get a feel for the data - which in turn will help formulate the criteria on which the real project will be built, very possibly with fewer fields and fewer indexes, but certainly with far more records.
In any event, it will very likely be one large, read-only table.
We need every field indexed just for now, simply to pull the unique (distinct) values and perhaps query them in a few other ways.
I don't think clustered indexes would be of any help at this point - but I've been known to be wrong, of course, since this is my first real brush with SQL Server as a step up from friendly Access with heavy VBA.
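(Side note for anyone who finds this later: pulling distinct values per column does not strictly require a permanent index on every field - a plain DISTINCT or GROUP BY works either way, the index only speeds it up. Table and column names below are invented for illustration.)

```sql
-- Unique values in one column, with how often each occurs
SELECT Field01, COUNT(*) AS Occurrences
FROM   BigTable
GROUP BY Field01
ORDER BY Occurrences DESC

-- Or just the number of distinct values in that column
SELECT COUNT(DISTINCT Field01) AS DistinctValues
FROM   BigTable
```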

    Books, gut feeling, various online help sites and this forum are my only resources.

At the moment I don't have any other tools besides SQL Server to analyze data of this size.

    Right now, while I am fumbling with it, nobody is connected to the server since it is dedicated to this project.

I am prepared to import and re-index the data once more. It appears that, due to the spids' locks during indexing (the ever-present spirit of Cpt. Murphy, or whatever), the database is corrupt: SQL Server stops recovery at a magical 64%, and I am still too ignorant to resolve this spid-blocking problem that is preventing SQL Server from fully recovering it.

Yes, I should have backed it up before indexing; I did not, and now I will likely pay the price of doing the data import and the indexing all over again - and perhaps learn some more.
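(For next time - a minimal full backup before a long index build looks something like this; the database name and disk path are made up:)

```sql
-- Take a full backup BEFORE a long index build, so a failed or
-- blocked build never forces a full re-import from scratch.
BACKUP DATABASE MyProjectDb
TO DISK = 'D:\Backups\MyProjectDb_full.bak'
WITH INIT  -- overwrite any existing backup set in that file
```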

    "Experience is what we get if we did not get what we wanted"
