  1. #1
    Join Date
    Jan 2002
    Posts
    189

    Unanswered: Sticking several thousand INSERTS into one call

    I may have been a bit daft here. We've got a thing on our intranet where people can upload CSV data files - contact records and stuff - into our SQL Server. Being a bit of a plank, I just wrote a page that'll loop through the CSV file and fire a stored proc to do the insert for each row it finds.

    These CSV files can be fairly large - ten or fifteen thousand lines, say. My upload page can take several minutes to process one file.

    Now I just wrote it like that because (a) I forgot you can run several SQL statements in one call, and (b) I assumed that, since stored procs are fastest, calling a stored proc several thousand times would be the best way to do it.

    This may not be the case. Does anyone know, before I go changing my code? If I loop through a CSV file, adding INSERT statements to one huge SQL string and then fire that, is it likely to be faster than hitting a stored proc several thousand times?
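    For concreteness, the "one huge SQL string" version amounts to sending the server a single batch like the one below (table and column names here are made up for illustration). Wrapping the batch in one explicit transaction avoids a separate implicit commit per statement, which is where a lot of the per-call cost goes:

    ```sql
    -- Hypothetical batch built by looping over the CSV client-side.
    BEGIN TRANSACTION;
    INSERT INTO Contacts (Name, Email) VALUES ('Alice', 'alice@example.com');
    INSERT INTO Contacts (Name, Email) VALUES ('Bob', 'bob@example.com');
    -- ...one INSERT per CSV row...
    COMMIT TRANSACTION;
    ```

    Note that if you build this string from raw CSV values you also take on the job of escaping quotes yourself, which the per-row stored proc call was handling for you.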

  2. #2
    Join Date
    Oct 2006
    Location
    CA
    Posts
    210
    Interesting question.

    I don't know how to do it, but it sure seems a natural for an XML transfer, since SQL Server has native XML support. Maybe post a question on that forum too...
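    On SQL Server 2000 the usual way to do that is OPENXML: pass the whole document as one parameter and shred it into rows server-side. A rough sketch, with invented element and column names:

    ```sql
    -- Hypothetical: @xml would be passed in from the upload page as one string.
    DECLARE @hdoc int;
    DECLARE @xml varchar(8000);
    SET @xml = '<rows><row Name="Alice" Email="alice@example.com"/></rows>';

    EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml;

    INSERT INTO Contacts (Name, Email)
    SELECT Name, Email
    FROM OPENXML(@hdoc, '/rows/row', 1)          -- 1 = attribute-centric mapping
    WITH (Name varchar(100), Email varchar(100));

    EXEC sp_xml_removedocument @hdoc;            -- free the parsed document
    ```

    This gets you one round trip and one set-based INSERT, at the cost of building the XML on the web side.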

  3. #3
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    SQL Server 2005 has native XML support; SQL Server 2000 has only limited XML support (OPENXML and the like).

    Don't concatenate all these records into a single insert. That would be a mess. Either continue to use procedure calls or write code to BCP the file into the database.

    You MAY get a slight improvement by loading the data into a staging table with no indexes, foreign keys, or constraints, and moving all the data from the staging table to production at the end of the process. A general rule of thumb is not to load data directly into production tables.
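    The staging pattern described above looks something like this (table and column names are invented for the example):

    ```sql
    -- Bare staging table: no indexes, keys, or constraints to slow the load.
    CREATE TABLE ContactsStaging (
        Name  varchar(100),
        Email varchar(100)
    );

    -- ...load ContactsStaging by whatever means: proc calls, bcp, BULK INSERT...

    -- One set-based move into production at the end, where the
    -- production table's constraints are checked once, in bulk.
    INSERT INTO Contacts (Name, Email)
    SELECT Name, Email
    FROM ContactsStaging;

    TRUNCATE TABLE ContactsStaging;
    ```

    The staging table also gives you a place to validate or clean the data before anything touches production.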
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  4. #4
    Join Date
    Oct 2006
    Location
    CA
    Posts
    210
    It's clunky, but you could create a disk file on the server (using a naming scheme that avoids conflicts), then BULK INSERT it into a temp table, and finally do an INSERT INTO yourtable SELECT col1, col2 FROM temptable.

    BULK INSERT bypasses much of the per-row logging and constraint-checking overhead of individual inserts, and you avoid the cost of thousands of separate network round trips.

    BULK INSERT also accepts UNC names so you don't have to create the file on your database server, but you'll have to pass your stored procedure the fully qualified name. Most web servers provide directories for temporary files.
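    A sketch of that flow, with an invented UNC path and made-up table and column names:

    ```sql
    -- Temp table to receive the raw file.
    CREATE TABLE #ContactsTemp (
        Name  varchar(100),
        Email varchar(100)
    );

    -- Load the CSV in one shot; a UNC path means the file can live on the
    -- web server rather than the database server.
    BULK INSERT #ContactsTemp
    FROM '\\webserver\uploads\contacts_upload.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- Then move it into the real table in one set-based statement.
    INSERT INTO Contacts (Name, Email)
    SELECT Name, Email FROM #ContactsTemp;
    ```

    The SQL Server service account needs read permission on that share, which is worth checking before wiring this into the upload page.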
    Last edited by vich; 10-12-06 at 20:08.

  5. #5
    Join Date
    May 2004
    Location
    Seattle
    Posts
    1,313
    I agree with vich, BULK INSERT is the way to go here.
