
Thread: slow SQLCMD

  1. #1
    Join Date
    Dec 2006
    Posts
    24

    Unanswered: slow SQLCMD

    hi there,
    i have to join 2 databases. One is hosted on an old MS SQL 2000, the other one is hosted on a Postgres server. I bought a new MS SQL 2008 server. Importing data/procedures and tasks from SQL 2000 to SQL 2008 is easily done.

    To get the Postgres data i wrote a script that exports the table structure, converts it to TSQL, exports the data and generates INSERT scripts for it.

    It works well. Only for 2 tables i get a weird problem:

    Both tables are too large to load the scripts into Management Studio (one is 70MB, the other one is almost 500MB). Unfortunately those damned Windows boxes get problems with bigger files.

    Well, why not use the command line on a Windows box? So i tried:

    Code:
    sqlcmd -S myhost -i my_import_script.sql
    (note: -S names the server to connect to; the -H option only sets the workstation name reported to the server)
    and i found that sqlcmd is incredibly slow compared to Management Studio!


    What am i doing wrong (NOCOUNT is ON)? Do i need to add a parameter? Is there a better command line tool than SQLCMD?

    I can't believe that something that is trivial with pgsql (and even mysql) could be such a problem with MS SQL!
    an Apple a day keeps Dr. Watson away !

  2. #2
    Join Date
    Dec 2006
    Posts
    24
    in answer to myself:

    i did put SET NOCOUNT ON into my script, but due to an error of mine i had commented it out! Now things run definitely faster! BUT: i would still be interested in an interactive command line like the ones mysql and postgres offer!
    an Apple a day keeps Dr. Watson away !

  3. #3
    Join Date
    Nov 2004
    Posts
    1,427
    Provided Answers: 4
    Have a look at the BCP utility
    With kind regards . . . . . SQL Server 2000/2005/2012
    Wim

    Grabel's Law: 2 is not equal to 3 -- not even for very large values of 2.
    Pat Phelan's Law: 2 very definitely CAN equal 3 -- in at least two programming languages

  4. #4
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    If the files are not able to be run by Management Studio, how are you doing the comparison to say that SQLCMD is slower? It sounds like the only tool that has worked for you so far.

    As for importing large amounts of data, you should think about exporting the data as comma, pipe, or tab delimited files, and then using BCP to import them into a SQL Server database. The slowness is in committing each and every single row. If this can be done as a batch, it will be much faster.
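    As a rough sketch of that approach (database name mydb, table dbo.my_table, file name and the pipe delimiter are made up for illustration; -T assumes a trusted Windows connection):

    Code:
    bcp mydb.dbo.my_table in my_table_export.txt -S myhost -T -c -t "|" -b 10000
    Here -c means character-mode data, -t "|" sets the field terminator, and -b 10000 commits every 10,000 rows as one batch instead of row by row.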

    SQLCMD is the interactive command line tool for SQL Server. Just do not supply a script.
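    For example, simply leaving out -i drops you into an interactive prompt (server name myhost and database mydb are placeholders):

    Code:
    sqlcmd -S myhost -d mydb
    1> SELECT COUNT(*) FROM my_table;
    2> GO
    Statements are buffered until you type GO, which sends the whole batch to the server.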

  5. #5
    Join Date
    Dec 2006
    Posts
    24
    Quote Originally Posted by MCrowley
    If the files are not able to be run by Management Studio, how are you doing the comparison to say that SQLCMD is slower? It sounds like the only tool that has worked for you so far.
    i do have smaller ones i could test with, but i think the bug was sitting in front of the screen (my erroneous commenting out of NOCOUNT ON)

    exporting the data as comma, pipe
    yep, CSV would have been an alternative. But i am also generating statements for CREATE TABLE. Therefore i have to do a Postgres to TSQL type conversion (parse timestamps from Postgres into datetimes on MS SQL, ...). So i decided to create the INSERTs for IDENTITY_INSERT in the same run! My intention was to issue a GO every 1000 INSERTs. If i get you right, that won't help and every row will be committed anyway???
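    Such a generated script might look roughly like this (table and column names are invented for illustration); note that GO only terminates a batch, it does not group the INSERTs into one commit:

    Code:
    SET NOCOUNT ON;
    SET IDENTITY_INSERT dbo.my_table ON;
    INSERT INTO dbo.my_table (id, created_at) VALUES (1, '2008-01-01 00:00:00');
    INSERT INTO dbo.my_table (id, created_at) VALUES (2, '2008-01-02 00:00:00');
    -- ... 998 more INSERTs ...
    GO
    SET IDENTITY_INSERT dbo.my_table OFF;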

    SQLCMD is the interactive command line tool for SQL Server. Just do not supply a script
    Great to learn! Thank you. Now i need a more comfortable shell; i think i have to learn PowerShell or something like that. CMD is just no fun when you are used to working on bash shells.
    an Apple a day keeps Dr. Watson away !

  6. #6
    Join Date
    Dec 2006
    Posts
    24
    @MCrowley: yep, thank you, SQLCMD with PowerShell does what i was looking for!
    an Apple a day keeps Dr. Watson away !

  7. #7
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    "GO" is just the standard batch terminator. A "batch" is the basic unit of SQL execution. If a syntax error is detected in a batch, the entire batch is not run.

    SQL Server does implicit commits. This means that any DML statement will commit, whether you tell it to or not. In fact, submitting an explicit COMMIT on its own will generate an error, as there is no explicit BEGIN TRANSACTION to refer back to. You could experiment with BEGIN TRANSACTION and COMMIT TRANSACTION statements every 10,000 rows or so, depending on how big the rows are, of course. That is an experiment I have not done as yet, so I don't know how much it will help. Also, dropping indexes may help if you are pressed for time. I am not sure how SQL Server handles constant inserts at the end of an index, or when it tries to rebalance the index tree structure.
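    The batching described above could be sketched like this (hypothetical table, batch size 10,000):

    Code:
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    INSERT INTO dbo.my_table (id, name) VALUES (1, 'first row');
    -- ... the rest of the first 10,000 INSERTs ...
    COMMIT TRANSACTION;
    GO
    BEGIN TRANSACTION;
    -- ... the next 10,000 INSERTs ...
    COMMIT TRANSACTION;
    GO
    Each COMMIT then writes one batch of rows in a single transaction instead of 10,000 individual auto-commits.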

  8. #8
    Join Date
    Dec 2006
    Posts
    24
    [QUOTE="MCrowley"]SQL Server does implicit commits. This means that any DML statement will commit, whether you tell it to or not[/QUOTE]

    Thank you again, i didn't know that! Transactions might be another solution. At the moment i am testing the migration with a Developer Edition of 2008. The new DB server machines with the commercial licenses will arrive within the next weeks. I plan to take one Sunday to migrate from the old SQL 2000 to 2008 and mix in the 20 Postgres tables i've got. The step from 2000 to 2008 is easy, and as my tests showed me, the sqlcmd scripts (really with NOCOUNT ON this time) are fast enough. After that i just have to add those maintenance and backup things, switch my app servers from one DB to the other and i am done.

    With our development app server and the SQL 2008 Developer Edition everything is running fine right now! For now it looks to me as if everything might be done within 5-6 hours.
    an Apple a day keeps Dr. Watson away !
