Unanswered: Extremely slow INSERT into large table (3.8 million rows)
We have a customer with a very basic session-tracking query in a ColdFusion page. The page runs a SQL Server 2000 stored procedure that inserts 3 values into a table that has 3.8 million rows. When the site gets busy, ColdFusion starts crashing and complains that these CF pages are taking 60s to run (all the page does is this basic insert).
The only way I could see this taking a long time is if the table is stored in some sort of sorted heap file on disk.
Any ideas on how to speed this up?
The only thing I could think of is to use a temporary table (a buffer) that has a trigger on it. The trigger would check the size of the table on every INSERT, and once it reaches 10,000 rows it would transfer the contents to the very large table and clear itself for future inserts.
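A rough sketch of that buffer-and-trigger idea as SQL 2000 T-SQL, assuming hypothetical table names SessionBuffer (the staging table) and SessionLog (the 3.8-million-row table) and an invented three-column layout; this is untested against the real schema:

```sql
-- Hypothetical buffer table mirroring the three inserted values
CREATE TABLE SessionBuffer (
    SessionID INT,
    PageName  VARCHAR(255),
    HitTime   DATETIME
)
GO

-- After each insert, flush the buffer into the big table once it
-- reaches 10,000 rows; COUNT(*) on a small heap stays cheap
CREATE TRIGGER trg_FlushSessionBuffer ON SessionBuffer
AFTER INSERT
AS
BEGIN
    IF (SELECT COUNT(*) FROM SessionBuffer) >= 10000
    BEGIN
        BEGIN TRANSACTION
        INSERT INTO SessionLog (SessionID, PageName, HitTime)
            SELECT SessionID, PageName, HitTime FROM SessionBuffer
        DELETE FROM SessionBuffer
        COMMIT TRANSACTION
    END
END
GO
```

One caveat with this design: the unlucky insert that crosses the 10,000-row threshold pays the cost of the whole flush inside its own transaction.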
This is from memory, but I believe there is a clustered index on the primary key of the table.
I actually don't see the ColdFusion errors, but we log all pages that take longer than 30s to complete. Just before the server becomes unresponsive, the log is full of this customer's .cfm page reporting that its page requests are taking over 60s to complete. I shut down the site in IIS and the server recovers.
If I retained anything from my DBMS course at school, a clustered index means that the data file is sorted on the key value of the index. I never designed their database, but at a quick glimpse I saw that the PK was just an incrementing integer and I don't even think it was a FK to any other relation. Could this index be the culprit?
For a table with frequent inserts, an index is not a bad option.
You may want to change the index, though. Making it a unique index might help, but again it depends on your case. We have a table with 12 million rows, data is inserted into it every 10 seconds, and the response has never been bad.
You might also want to create a historical table and move all the historical data into it; that way the number of rows in the actual table decreases.
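That archival step could look roughly like the following, reusing the hypothetical SessionLog table and column names from above and an assumed 90-day cutoff; adjust both to the real schema:

```sql
-- Move rows older than 90 days into a history table, then delete
-- them from the live table, all inside one transaction
BEGIN TRANSACTION

INSERT INTO SessionLogHistory (SessionID, PageName, HitTime)
    SELECT SessionID, PageName, HitTime
    FROM SessionLog
    WHERE HitTime < DATEADD(day, -90, GETDATE())

DELETE FROM SessionLog
WHERE HitTime < DATEADD(day, -90, GETDATE())

COMMIT TRANSACTION
```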
Also try to lower the locking level on this table for all queries other than the insert. We use "SELECT * FROM TableA WITH (NOLOCK)", as NOLOCK doesn't hold any lock on the table.
The index does have "Unique Values" selected. I don't see how INSERTs through an index can be fast. For every insert it has to traverse the tree (assuming a B+ index), while with a non-indexed, non-clustered file the insert would just append to the end of the data file. Now, a hash index I could see being fast, since it would be one lookup in the directory and then one more I/O to bring up the page the data is to be inserted into.
My theory now points to their transaction log. I checked it out, and the transaction log file was 1.1GB! Unfortunately my knowledge of transaction log implementation is very limited, so I am not sure how that file is traversed. I've trimmed the transaction log back and set it so it never grows larger than 50MB.
I don't want to start the site up yet unless I know this solution is good. I've contacted the web developers about this and they don't have a clue. They wanted SQL Server because they heard it was fast, but they haven't used any of the advanced features the DBMS provides.
Capping the log at 50 MB is not a good idea, I think.
How often do you take a transaction log backup? If it's not frequent, let the log grow automatically; otherwise the inserts will start failing.
What kind of hardware arrangement have you got? RAID 5? RAID 10?
One thing I'm sure of: a 1 GB log file has nothing to do with the slow insert. SQL Server automatically moves data from the log file to the data files at every checkpoint.
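For reference, a periodic log backup on SQL Server 2000 looks roughly like this; the database name and path are placeholders, and in practice you'd schedule it as a SQL Server Agent job:

```sql
-- Backing up the transaction log truncates the inactive portion,
-- keeping the log file from growing without bound (full or
-- bulk-logged recovery model)
BACKUP LOG CustomerDB
    TO DISK = 'D:\Backups\CustomerDB_log.bak'
```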
In SQL Server Enterprise Manager, right-click the table and select All Tasks -> Generate SQL Script -> Options from the popup menu. Select everything in the "Table scripting options" section. In "File options" select ANSI, then click OK and choose a file name to store the info in. This file includes information about the table's indexes, constraints, foreign keys, and triggers. (You can also script the full database by selecting all objects.)
If you want information about the SQL Server settings, run the query
SELECT * FROM MASTER.DBO.SYSCURCONFIGS
and save the results to a file. Some information is also stored under the registry key
"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server"
and its subkeys; use REGEDIT to export it. All of this data can be security sensitive, so select some of it and post it to the forum. I think the table info would be enough.
I notice this question is ages old, but insert performance is a never-ending issue. Here are some ideas on what might be worth a try:
Choose the target table's clustered index wisely, so that inserts won't lead to clustered index page splits. Usually an identity column is a good choice.
Place the database data file and log file on two physically separate devices.
Let the client insert into a temporary heap table first (heap tables don't have a clustered index); then issue one big "insert-into-select" statement to push all of that staging data into the actual target table.
Limit the number of indexes on the target table (as well as check constraints, indexed views, etc.).
Place indexes on the master-table columns referenced by the target table's foreign keys.
Decrease transaction logging by choosing the bulk-logged recovery model.
Prefer server-side processing (e.g. by means of "insert-into-select") to client-to-server round-trips wherever possible.
.NET only: increase the ADO.NET BatchSize or use SqlBulkCopy.
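A few of the items above (bulk-logged recovery, a staging heap, and one server-side "insert-into-select") sketched together as T-SQL; the database and table names are hypothetical, and the recovery-model change must fit your backup strategy:

```sql
-- Bulk-logged recovery reduces log volume for bulk operations
ALTER DATABASE CustomerDB SET RECOVERY BULK_LOGGED
GO

-- Staging heap: no clustered index, so client inserts just append
CREATE TABLE SessionStage (
    SessionID INT,
    PageName  VARCHAR(255),
    HitTime   DATETIME
)
GO

-- One set-based push into the real table instead of thousands of
-- client round-trips, then empty the stage for the next batch
INSERT INTO SessionLog (SessionID, PageName, HitTime)
    SELECT SessionID, PageName, HitTime FROM SessionStage

TRUNCATE TABLE SessionStage
```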