  1. #1
    Join Date
    May 2003
    Posts
    4

    Unanswered: Huge logfile after a DELETE

    Hi

    I have a business case where I have a database with a size of 2 GB, and I need to delete 70% of its contents. During the delete, the size of the database (including the logfile) grows to 10 GB, yet when I look at the database in SQL Server Enterprise Manager, only 700 MB of those 10 GB are actually used.

    The deletes are performed with ordinary SQL statements: "DELETE FROM X WHERE Y = 1 AND Z = 2..."

    I'd be thankful for hints on how to solve this problem.

    Regards,
    Peter

  2. #2
    Join Date
    Jan 2004
    Location
    Romania - Bucharest
    Posts
    50
    Perhaps it would be easier to copy the remaining records into a temporary table, drop the original table, recreate it, and then copy the rows from the temporary table back into the destination table.
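
    In T-SQL that might look roughly like this (only a sketch: the table X and its two columns come from the original post, and you would also have to recreate any indexes, constraints, and permissions that dropping the table removes):

    Code:
    -- Copy the rows you want to keep into a temporary table
    SELECT * INTO #keep FROM X WHERE NOT (Y = 1 AND Z = 2)

    -- Drop and recreate the original table (script out its real DDL first;
    -- only the two columns from the example are shown here)
    DROP TABLE X
    CREATE TABLE X (Y int, Z int)

    -- Copy the surviving rows back and clean up
    INSERT INTO X SELECT * FROM #keep
    DROP TABLE #keep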

  3. #3
    Join Date
    Jul 2002
    Location
    Village, MD
    Posts
    621

    Re: Huge logfile after a DELETE

    Originally posted by peterand
    Hi

    I have a business case where I have a database with a size of 2 GB, and I need to delete 70% of its contents. During the delete, the size of the database (including the logfile) grows to 10 GB, yet when I look at the database in SQL Server Enterprise Manager, only 700 MB of those 10 GB are actually used.

    The deletes are performed with ordinary SQL statements: "DELETE FROM X WHERE Y = 1 AND Z = 2..."

    I'd be thankful for hints on how to solve this problem.

    Regards,
    Peter
    Every DELETE command is a transaction that is recorded in the transaction log (even under the simple recovery model). If you delete a big amount of data with a single command, the size of your transaction log will be at least as large as the delete transaction itself. To avoid this you can delete the data in portions (by date, etc.), and each checkpoint will then free space in the transaction log. Afterwards you can shrink your database/log files.
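
    For example, something along these lines (just a sketch: X and its filter come from the original post, while the 10,000-row batch size and the log file name are made up; under the full recovery model you would back up the log between batches instead of relying on checkpoints):

    Code:
    SET ROWCOUNT 10000                  -- cap each DELETE at 10,000 rows (SQL Server 2000 style)
    WHILE 1 = 1
    BEGIN
        DELETE FROM X WHERE Y = 1 AND Z = 2
        IF @@ROWCOUNT = 0 BREAK         -- nothing left to delete
        CHECKPOINT                      -- with the simple model, lets log space be reused between batches
    END
    SET ROWCOUNT 0                      -- remove the row limit again

    -- Optionally reclaim disk space afterwards (the logical log file name is made up)
    DBCC SHRINKFILE (MyDb_Log)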

  4. #4
    Join Date
    Oct 2003
    Posts
    58

    Question

    Are there any triggers on the tables?

  5. #5
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662

    Re: Huge logfile after a DELETE

    Originally posted by peterand
    Hi

    I have a business case where I have a database with a size of 2 GB, and I need to delete 70% of its contents. During the delete, the size of the database (including the logfile) grows to 10 GB, yet when I look at the database in SQL Server Enterprise Manager, only 700 MB of those 10 GB are actually used.

    The deletes are performed with ordinary SQL statements: "DELETE FROM X WHERE Y = 1 AND Z = 2..."

    I'd be thankful for hints on how to solve this problem.

    Regards,
    Peter
    I would use BCP...QUERYOUT for this, filtering in the WHERE clause for only the records I need to keep. Once that's done, I'd truncate the table (a non-logged operation) and BULK INSERT (also non-logged) back into the table. This process is much faster and does not consume any log, unless your db is in Bulk-Logged recovery mode. Make sure to do a full backup after all this if your db is in Full recovery mode. The Select Into/Bulk Insert db option also needs to be turned on for this to work.
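
    A rough sketch of that sequence (the database, server, table, and file names are made up; step 1 runs from a command prompt, the rest from Query Analyzer):

    Code:
    -- 1) Export only the rows you want to keep (run from a command prompt, not T-SQL):
    --      bcp "SELECT * FROM MyDb.dbo.X WHERE NOT (Y = 1 AND Z = 2)" queryout C:\keep.dat -n -T -S MyServer

    -- 2) Empty the table and load the kept rows back in:
    TRUNCATE TABLE X
    BULK INSERT X FROM 'C:\keep.dat' WITH (DATAFILETYPE = 'native', TABLOCK)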

  6. #6
    Join Date
    Jul 2002
    Posts
    58

    Re: Huge logfile after a DELETE

    Originally posted by rdjabarov
    I would use BCP...QUERYOUT for this, filtering in the WHERE clause for only the records I need to keep. Once that's done, I'd truncate the table (a non-logged operation) and BULK INSERT (also non-logged) back into the table. This process is much faster and does not consume any log, unless your db is in Bulk-Logged recovery mode. Make sure to do a full backup after all this if your db is in Full recovery mode. The Select Into/Bulk Insert db option also needs to be turned on for this to work.
    Actually, just to set the record straight: truncating a table IS a logged operation, and WILL consume log space. You can prove this by running the following:

    USE PUBS
    GO
    BEGIN TRAN
    TRUNCATE TABLE sales
    SELECT * FROM sales    -- returns no rows: the table has just been truncated
    ROLLBACK TRAN
    SELECT * FROM sales    -- all rows are back after the rollback
    GO

    You'll see that the TRUNCATE command is rolled back and all the rows are restored by the second SELECT, which would not be possible if TRUNCATE were a non-logged operation. If you still doubt it, do this on a test machine and cold-boot the server between the first SELECT and the ROLLBACK! After the machine comes back up and SQL Server runs recovery on PUBS, all the rows will be restored. [Note: only for the brave and only on test machines, as nasty things can happen when the power drops. Usually safe, but definitely playing with matches.]

    With TRUNCATE TABLE, what's logged is the removal of entire pages rather than individual rows, which is why truncating a table generally takes up much less log space.

  7. #7
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    I stand corrected; it's the page removal rather than row-by-row.

    But then how do you explain this anomaly?

    Code:
    -- Create a file called tmp.txt in the root of C: drive and put something in it
    --Then, run the following:
    create table test (field varchar(8000) null)
    go
    begin tran
    bulk insert test from 'c:\tmp.txt'
    rollback tran
    go
    select * from test
    go
    drop table test
    go

  8. #8
    Join Date
    Jul 2003
    Location
    The Dark Planet
    Posts
    1,401
    Originally posted by rdjabarov
    I stand corrected; it's the page removal rather than row-by-row.

    But then how do you explain this anomaly?

    Code:
    -- Create a file called tmp.txt in the root of C: drive and put something in it
    --Then, run the following:
    create table test (field varchar(8000) null)
    go
    begin tran
    bulk insert test from 'c:\tmp.txt'
    rollback tran
    go
    select * from test
    go
    drop table test
    go
    I did not understand ... what's the anomaly?
    From the Holy Book:
    The BULK INSERT statement can be executed within a user-defined transaction. Rolling back a user-defined transaction that uses a BULK INSERT statement and BATCHSIZE clause to load data into a table or view using multiple batches rolls back all batches sent to SQL Server.
    Get yourself a copy of The Holy Book

    The physical order of data in a database has no meaning. -Brett Kaiser

  9. #9
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    OK, so who can explain the term "non-logged operation", and who can give an example of a non-logged command? The argument presented a couple of posts earlier called that term into question by wrapping TRUNCATE TABLE in a transaction and showing it can be rolled back. For that matter, DROP <OBJECT_TYPE> <OBJECT_NAME> can also be rolled back. So what is a "non-logged operation"???

  10. #10
    Join Date
    Jul 2003
    Location
    The Dark Planet
    Posts
    1,401
    Get yourself a copy of The Holy Book

    The physical order of data in a database has no meaning. -Brett Kaiser

  11. #11
    Join Date
    Jul 2002
    Posts
    58
    Originally posted by rdjabarov
    OK, so who can explain the term "non-logged operation", and who can give an example of a non-logged command? The argument presented a couple of posts earlier called that term into question by wrapping TRUNCATE TABLE in a transaction and showing it can be rolled back. For that matter, DROP <OBJECT_TYPE> <OBJECT_NAME> can also be rolled back. So what is a "non-logged operation"???
    From SQL Server 2000 Administrator's Companion

    "Because nonlogged operations are not recorded in the transaction log, you much redo them if a recovery is necessary. Therefore, you should carefully consider the effects of enabling nonlogged operations before you do so. The operations that can be performed as nonlogged operations are as follow:

    *SELECT INTO
    *BULK COPY and Bulk Copy Program (BCP)
    *CREATE INDEX
    *Certain text operations"

    A few paragraphs later, it is explained that the non-logged text operations are WRITETEXT and UPDATETEXT.

  12. #12
    Join Date
    Jul 2003
    Location
    The Dark Planet
    Posts
    1,401
    From Books online
    When using the full recovery model, all row-insert operations performed by bcp are logged in the transaction log. For large data loads, this can cause the transaction log to fill rapidly. To help prevent the transaction log from running out of space, a minimally logged bulk copy can be performed if all of these conditions are met:

    *The recovery model is simple or bulk-logged.
    *The target table is not being replicated.
    *The target table does not have any triggers.
    *The target table has either 0 rows or no indexes.
    *The TABLOCK hint is specified. For more information, see Controlling the Locking Behavior.

    Any bulk copy into an instance of Microsoft® SQL Server™ that does not meet these conditions is logged.
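
    As a small illustration of meeting those conditions (the database name, table, and data file are made up, and the target table is assumed to be empty, untriggered, and not replicated):

    Code:
    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED               -- or SIMPLE; satisfies the recovery-model condition
    BULK INSERT MyDb.dbo.X FROM 'C:\keep.dat' WITH (TABLOCK)   -- TABLOCK satisfies the locking condition
    ALTER DATABASE MyDb SET RECOVERY FULL                      -- switch back, then take a full backup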
    Get yourself a copy of The Holy Book

    The physical order of data in a database has no meaning. -Brett Kaiser

  13. #13
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    ...and if SELECT INTO is "Non-Logged", then how can it be rolled back? And if it can, then what's wrong with stating that TRUNCATE is "Logged"?

    begin tran
    select * into test1 from test2
    rollback tran
    -- the TEST1 table DOES NOT EXIST!!!

  14. #14
    Join Date
    Jul 2002
    Posts
    58
    Originally posted by rdjabarov
    ...and if SELECT INTO is "Non-Logged", then how can it be rolled back? And if it can, then what's wrong with stating that TRUNCATE is "Logged"?

    begin tran
    select * into test1 from test2
    rollback tran
    -- the TEST1 table DOES NOT EXIST!!!

    Is your DB in BULK_LOGGED recovery mode? If it's not, then EVERYTHING is logged and EVERYTHING can be rolled back.

  15. #15
    Join Date
    Jul 2003
    Location
    The Dark Planet
    Posts
    1,401
    Originally posted by Steve Duncan
    Is your DB in BULK_LOGGED recovery mode? If it's not, then EVERYTHING is logged and EVERYTHING can be rolled back.
    In a Bulk-Logged Recovery model, the data loss exposure for these bulk copy operations is greater than in the Full Recovery model. While the bulk copy operations are fully logged under the Full Recovery model, they are minimally logged and cannot be controlled on an operation-by-operation basis under the Bulk-Logged Recovery model. Under the Bulk-Logged Recovery model, a damaged data file can result in having to redo work manually.

    In other words ... minimally logged and non-logged are not the same.

    Edit: It's time for me to sleep, though I don't want to leave this discussion in the middle.
    Get yourself a copy of The Holy Book

    The physical order of data in a database has no meaning. -Brett Kaiser
