  1. #1
Join Date: May 2002

    Unanswered: transactional log

    Hello folks,

I have a SQL Server database into which we insert roughly 1,000,000 records per hour.

In the last few days I have had problems: the transaction log fills up very fast, and of course no inserts are possible while it is full. I run a truncation/backup of the transaction log once a night.

What can I do to keep the transaction log from filling up so fast?


  2. #2
Join Date: Nov 2002

Uhh... dump the logs every 10 minutes?

What kind of volume is 1 million an hour?

Do they have to be inserts?
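
A minimal sketch of that log-dump idea, assuming a hypothetical database named YourDb and a backup path you would replace with your own; backing the log up frequently lets the committed portion be reused instead of growing until the nightly run:

    -- Dump (back up) the transaction log so the inactive portion can
    -- be reused. YourDb and the disk path are placeholders; schedule
    -- this every 10 minutes or so with SQL Server Agent.
    BACKUP LOG YourDb
        TO DISK = N'D:\Backups\YourDb_log.trn'
        WITH NOINIT;  -- append, preserving the log backup chain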

    It's a Great Day for America everybody!


    The physical order of data in a database has no meaning.

  3. #3
Join Date: Sep 2003

If it's an insert, then try to use ranges of records if possible, and fire a CHECKPOINT after each range (for that, the security context has to belong to the db_owner role). A range can be based on an ID, a date, or any other identifier that segregates rows so that you don't insert the same row twice.
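
A rough sketch of that batching idea, assuming a hypothetical staging table dbo.Staging with an Id key and a target table dbo.Target; the names and batch size are illustrative, and note that CHECKPOINT only frees log space this way under the SIMPLE recovery model:

    DECLARE @BatchSize int, @LastId int, @MaxId int;
    SET @BatchSize = 50000;
    SET @LastId = 0;
    SELECT @MaxId = MAX(Id) FROM dbo.Staging;

    WHILE @LastId < @MaxId
    BEGIN
        -- Copy one range of rows, keyed on Id so no row goes in twice.
        INSERT INTO dbo.Target (Id, Payload)
        SELECT Id, Payload
        FROM dbo.Staging
        WHERE Id > @LastId AND Id <= @LastId + @BatchSize;

        SET @LastId = @LastId + @BatchSize;

        -- Flush dirty pages and let log space be reused (caller must be
        -- in db_owner; truncation on checkpoint requires SIMPLE recovery).
        CHECKPOINT;
    END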

  4. #4
Join Date: May 2002

    transactional log

It cannot be done by ranges. There is an application used by end users, and they can insert whenever they want.

It can only be inserts, since after a few seconds, or at most 4 minutes, the records are deleted.

  5. #5
Join Date: Oct 2001
Location: Naples, FL
Do you require point-in-time recovery? If not, use the SIMPLE recovery model.

Also, with that number of inserts and deletes, make sure you have a clustered index on your tables, preferably on an identity column; heaps don't reclaim empty space from deletes, which will hurt you with that number of transactions.
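
A minimal sketch of both suggestions, assuming a hypothetical database YourDb and table dbo.YourTable; adjust the names to your schema:

    -- Give up point-in-time recovery; the log then truncates itself at
    -- each checkpoint instead of growing until the next log backup.
    ALTER DATABASE YourDb SET RECOVERY SIMPLE;

    -- Cluster the table on an identity column so space freed by the
    -- deletes is reused and new rows append at the end of the index.
    ALTER TABLE dbo.YourTable
        ADD Id int IDENTITY(1,1) NOT NULL;

    CREATE CLUSTERED INDEX IX_YourTable_Id
        ON dbo.YourTable (Id);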

    Ray Higdon MCSE, MCDBA, CCNA
