  1. #1
    Join Date
    Dec 2002
    Location
    Vancouver, BC
    Posts
    35

    Unanswered: Extremely slow INSERT into large table (3.8 million rows)

    Hi there...

    We have a customer who has a very basic session tracker query in a Cold Fusion page. The page runs a SQL 2000 stored procedure that inserts 3 values into a table that has 3.8 million rows. When the site gets busy, Cold Fusion starts crashing and complains that these CF pages are taking 60s to run (all they do is this basic insert).
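
    For context, the procedure boils down to roughly this (just a sketch; the table and column names here are placeholders, not their actual schema):

        CREATE PROCEDURE dbo.usp_TrackSession
            @SessionID varchar(64),
            @PageName  varchar(255),
            @HitTime   datetime
        AS
        SET NOCOUNT ON
        -- single-row insert into the 3.8 million row tracking table (hypothetical names)
        INSERT INTO dbo.SessionTracker (SessionID, PageName, HitTime)
        VALUES (@SessionID, @PageName, @HitTime)
        GO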

    The only way I could see this taking a long time is if the table is stored in some sort of sorted heap file on disk.

    Any ideas on how to speed this up?

    The only thing I could think of is to use a temporary table (a buffer) that has a trigger on it... The trigger would check the size of the table on every INSERT, and once it reaches 10,000 rows it would transfer the contents to the very large table and clear itself for future inserts.
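
    Something along these lines is what I have in mind (just a sketch; table and column names are made up):

        -- small buffer table that takes the fast inserts
        CREATE TABLE dbo.SessionBuffer (SessionID varchar(64), PageName varchar(255), HitTime datetime)
        GO
        -- once the buffer reaches 10,000 rows, push everything into the big table and empty the buffer
        CREATE TRIGGER trg_SessionBuffer_Flush ON dbo.SessionBuffer
        AFTER INSERT
        AS
        SET NOCOUNT ON
        IF (SELECT COUNT(*) FROM dbo.SessionBuffer) >= 10000
        BEGIN
            INSERT INTO dbo.SessionTracker (SessionID, PageName, HitTime)
            SELECT SessionID, PageName, HitTime FROM dbo.SessionBuffer
            DELETE FROM dbo.SessionBuffer
        END
        GO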

  2. #2
    Join Date
    Feb 2002
    Posts
    2,232
    What are the indexes on this table? Can you be more specific about the ColdFusion errors?

  3. #3
    Join Date
    Dec 2002
    Location
    Vancouver, BC
    Posts
    35
    This is coming from memory, but I believe there is a clustered index on the primary key of the table.

    I actually don't see the Cold Fusion errors, but we log all pages that take longer than 30s to complete. Just before the server becomes unresponsive, the log is full of this customer's .cfm page taking over 60s to complete its page requests. I shut down the site in IIS and the server recovers.

    If I retained anything from my DBMS course at school, a clustered index means the data file is sorted on the index's key value. I didn't design their database, but at a quick glance I saw that the PK was just an incrementing integer, and I don't even think it is an FK in any other relation. Could this index be the culprit?
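
    If it helps, this is how I'd pull the actual index definition (the table name is a guess on my part):

        EXEC sp_helpindex 'dbo.SessionTracker'
        -- or everything about the table, including constraints:
        EXEC sp_help 'dbo.SessionTracker'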

    Thanks...

  4. #4
    Join Date
    May 2002
    Location
    Timbaktu
    Posts
    185

    Index is fine

    For a table with inserts, an index is not a bad option.
    You may want to change the index, though. If you could make it a unique index it might help, but again it depends on your case. We have a table with 12 million rows where data is inserted every 10 seconds, and the response has never been bad.

    You might also want to create a historical table and move all the historical data into it; that way the number of rows in the actual table will be decreased.
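
    For example, something like this (table names and the cutoff are only placeholders):

        -- move rows older than 90 days into a history table
        INSERT INTO dbo.SessionTracker_History (SessionID, PageName, HitTime)
        SELECT SessionID, PageName, HitTime
        FROM dbo.SessionTracker
        WHERE HitTime < DATEADD(day, -90, GETDATE())

        -- then remove them from the live table
        DELETE FROM dbo.SessionTracker
        WHERE HitTime < DATEADD(day, -90, GETDATE())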

    Also try to lower the locking level on this table for all queries other than the insert. We are using "Select * from Table A with (NoLock)", as NOLOCK doesn't hold any locks on the table.

    Hope this helps!

  5. #5
    Join Date
    Dec 2002
    Location
    Vancouver, BC
    Posts
    35
    The index does have "Unique Values" selected. I don't see how INSERTs into an indexed table can be the fast way to do inserts. For every insert it has to traverse the tree (assuming a B+ index), while with a non-indexed, non-clustered file the insert would just append to the end of the data file. Now a hash index I could see being fast, since it would be one lookup in the directory and then one more IO to bring up the page that the data is to be inserted into.

    My theory now points to their transaction log. I checked it out and the transaction log file was 1.1GB!!! Unfortunately my knowledge of transaction log internals is very limited, so I am not sure how that file is traversed. I've trimmed the transaction log file back and set it so it never grows larger than 50MB.

    I don't want to start the site up again until I know this solution is good. I've contacted the web developers about this and they don't have a clue. They wanted SQL Server because they heard it was fast, but they haven't used any of the advanced features the DBMS provides.

    Thanks for your advice thus far.

  6. #6
    Join Date
    May 2002
    Location
    Timbaktu
    Posts
    185

    Setting the transaction log

    to not grow beyond 50 MB is not a good idea, I guess.
    How often do you take transaction log backups? If it's not frequently, then let the log grow automatically; otherwise the inserts will start failing.
    What kind of hardware arrangement have you got? RAID 5? RAID 10?

    One thing I'm sure of - a 1 GB log file has got nothing to do with the slow insert. SQL Server automatically writes the data pages to disk at every checkpoint.
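
    If you are not backing up the log already, something like this (database name and backup path are placeholders) keeps the log in check without capping it:

        -- check the current recovery model first
        SELECT DATABASEPROPERTYEX('YourDatabase', 'Recovery')

        -- back up the log regularly so the inactive portion can be reused
        BACKUP LOG YourDatabase TO DISK = 'D:\Backups\YourDatabase_log.bak'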

  7. #7
    Join Date
    Dec 2002
    Location
    Czech Republic
    Posts
    249

    Re: Setting the transaction log

    In SQL Server Enterprise Manager,
    right-click the table and select from the popup menu
    All Tasks -> Generate SQL Script -> Options.
    Select everything in the "Table scripting options" section.
    In "File options" select ANSI, then click OK.
    Select a file name to store the info in.
    This file includes information about the indexes,
    constraints, foreign keys and triggers on the table.
    (You can also script the full database by selecting
    more options.)

    If you want information about the SQL Server settings,
    run the SQL query
    SELECT * FROM MASTER.DBO.SYSCURCONFIGS
    or
    EXEC sp_configure
    and save the results to a file.
    Some information is stored in the registry keys
    "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server"
    "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer"
    and their subkeys; use REGEDIT to export them.

    All of this data can be security sensitive,
    so select only some of it and post it to the forum.
    I think the table info would be enough.

  8. #8
    Join Date
    Nov 2011
    Posts
    1

    Slow Inserts

    I notice this question is ages old, but insert performance is a never-ending issue. Here are some ideas on what might be worth a try:

    • Choose the target table's clustered index wisely, so that inserts won't lead to clustered index node splits. Usually an identity column is a good choice
    • Place database datafile and logfile on two physically separated devices
    • Let the client insert into a temporary heap table first (heap tables don't have a clustered index); then issue one big "insert-into-select" statement to push all that staging-table data into the actual target table (see the sketch at the end of this post)
    • Limit the number of indices on the target table (as well as check constraints, indexed views, etc)
    • Place indices on master table columns referenced by the target table's foreign keys
    • Decrease transaction logging by choosing bulk-logged recovery model
    • Prefer server-side processing (e.g. by means of "insert-into-select") to client-to-server round trips wherever possible
    • .NET only: Increase ADO.NET BatchSize or apply SqlBulkCopy


    Taken from: Tips For Lightning-Fast Insert Performance On SqlServer
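
    A rough sketch of the staging-heap idea from the list above, combined with the bulk-logged recovery model (all names are placeholders; adjust to your own schema):

        -- cut down transaction logging for the bulk move
        ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED

        -- clients insert into a heap staging table (no clustered index)
        CREATE TABLE dbo.SessionStage (SessionID varchar(64), PageName varchar(255), HitTime datetime)

        -- periodically push the staged rows into the real target in one statement
        INSERT INTO dbo.SessionTracker (SessionID, PageName, HitTime)
        SELECT SessionID, PageName, HitTime FROM dbo.SessionStage

        -- then empty the staging table
        TRUNCATE TABLE dbo.SessionStage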
