Page 1 of 2
Results 1 to 15 of 22
  1. #1
    Join Date
    Oct 2003
    Posts
    7

    Unanswered: Duplicating Table Data

    I have a table in my database, Table 1, which I need to duplicate to Table 2 -- structure, data, everything.

    I tried right-clicking on Table 1 in Enterprise Manager, and selecting "copy", which generated a SQL script that I could use to create Table 2, which is a clone of Table 1. Then I tried the following SQL:

insert into Table2 select * from Table1

    ...and got the following error message back:

    -----
    An explicit value for the identity column in table 'Table2' can only be specified when a column list is used and IDENTITY_INSERT is ON.
    -----

    What's the best way to create a duplicate table in a database, down to the data in it?

  2. #2
    Join Date
    Sep 2003
    Location
    New York, NY
    Posts
    136
Delete the old Table2 you created and try using this:
    select * into table2 from table1
    Last edited by rohitkumar; 10-21-03 at 14:06.

  3. #3
    Join Date
    Sep 2003
    Posts
    364
    You could bcp the data too.
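    For reference, a bcp round trip might look like the following (a sketch; the database, table, file, and server names are placeholders, and -T assumes a trusted connection):

    Code:
    bcp MyDb.dbo.Table1 out table1.dat -n -T -S myserver
    bcp MyDb.dbo.Table2 in table1.dat -n -T -S myserver

    The -n switch keeps the data in native format, which is the safest choice when copying between identical table structures.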

  4. #4
    Join Date
    May 2003
    Location
    Epsom, United Kingdom
    Posts
    42
    Originally posted by peterlemonjello
    You could bcp the data too.
    Why?!!

    SELECT * INTO newTable FROM otherTable

    is perfect for what the man needs to do.
    Shadow to Light

  5. #5
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Do what the error message is telling you to do:
    -------------------------------------------------
    SET IDENTITY_INSERT Table2 ON
    GO
    insert into Table2 (Field1, Field2, ... FieldN)
    select Field1, Field2, ... FieldN
    from Table1
    GO
    SET IDENTITY_INSERT Table2 OFF
    GO
    -------------------------------------------------

    "select * into table2 from table1" will work for the data, but you will need to create the indexes and constraints yourself.

    blindman
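    To illustrate that last point, after a SELECT ... INTO you would re-create the keys and indexes by hand, something like this (a sketch; the constraint, index, and column names are made up):

    Code:
    select * into Table2 from Table1

    alter table Table2 add constraint PK_Table2 primary key (ID)
    create index IX_Table2_Field1 on Table2 (Field1)

    Note that in a simple single-table select, SELECT INTO also carries over the IDENTITY property of the source column, so no IDENTITY_INSERT juggling is needed.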

  6. #6
    Join Date
    Sep 2003
    Posts
    522
    if table1 is fairly large (the specific threshold depends on the power of your box, I/O, and the table structure), select * into ... is the preferred method over insert into ... select * from ...

  7. #7
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Never heard that before. What's the reasoning?

    blindman

  8. #8
    Join Date
    Sep 2003
    Posts
    522
    you should try it sometime then.

  9. #9
    Join Date
    Oct 2003
    Location
    Sweden
    Posts
    45
    I can see how SELECT INTO could have better performance as long as the recovery model isn't set to full.

    Or am I way off target?
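    For anyone wanting to check (and, where appropriate, change) the recovery model before running such a comparison, something like this should work on SQL Server 2000 (a sketch; 'MyDb' is a placeholder for your database name):

    Code:
    select databasepropertyex('MyDb', 'Recovery')

    alter database MyDb set recovery bulk_logged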

  10. #10
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    I have tried it, many times. I was just wondering why Select * Into would be preferred.

    I know that Select Into used to be an unlogged transaction, so that would make it run faster, but you would have to back up your database when you were finished processing. SQL documentation used to say that Insert Into was preferred precisely because Select Into wasn't logged.

    I've looked in recent SQL Books Online documentation and I've been unable to find any indication as to whether Select Into is still an unlogged transaction or how it affects the various recovery models. Do you know anything about this?

    blindman

  11. #11
    Join Date
    Sep 2003
    Posts
    522

    Cool

    the original task was to duplicate the data into another table. if the original table contains a significant amount of data, doing select * into new_table will be much faster than doing create new_table ... insert new_table select * from old_table. in fact, the difference can be hours or even days depending on how much data we're talking about.

    in one of the threads on this forum i also got into an argument with one of the members. i was saying exactly what you suggested earlier, insert new_table select * from old_table. he recommended that i test the difference and... i was ashamed to see that it was VERY significant. the best thing i could do was apologize for my ignorance

  12. #12
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Yes, but is that just because it is an unlogged transaction? That's what I'm trying to find out. Have you seen any white papers about it?

    blindman

  13. #13
    Join Date
    Oct 2003
    Location
    Sweden
    Posts
    45
    MSDN (SQL Books online) describes the different recovery models and says that only minimal logging is used by bulk operations (such as SELECT INTO) for the Bulk-Logged recovery model.

    I've not seen any white papers on this, but not logging should give you quite a performance increase I think.

    Cheers,
    Robert
    Originally posted by blindman
    I've looked in recent SQL Books Online documentation and I've been unable to find any indication as to whether Select Into is still an unlogged transaction or how it affects the various recovery models. Do you know anything about this?

  14. #14
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    So under the Full Recovery model, which is what I would expect most production systems to use, all transactions including SELECT INTO are fully logged. The performance boost would seem to be as much a result of the recovery model used as of the statement syntax. I would expect SELECT INTO to be no more efficient than INSERT INTO in Full Recovery mode.

    ms_sql_dba, do you remember if the comparison testing you did was on a database set to Full Recovery? I'll have to experiment with this when I get a chance.

    blindman

  15. #15
    Join Date
    Sep 2003
    Posts
    522
    ok, so i took a table of a little under 900K rows and performed 2 tests, using a database with the full recovery model:

    the first was to do insert table_name select * from source_table

    the second was to select * into table_name from source_table

    no order by, group by, or where clauses were involved in either case.
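    A harness along these lines would produce such statistics (a sketch; source_table stands in for the actual source, and bigtest1 is assumed to have been created beforehand with the same structure):

    Code:
    set statistics io on
    set statistics time on
    go

    insert bigtest1 select * from source_table
    go

    select * into bigtest2 from source_table
    go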

    here are the results (bigtest1 corresponds to the first test, bigtest2 to the second):

    Code:
    Table 'bigtest1'. Scan count 0, logical reads 887310, physical reads 0, read-ahead reads 0.
    Table 'SQLTables'. Scan count 1, logical reads 11048, physical reads 0, read-ahead reads 0.
    
    SQL Server Execution Times:
       CPU time = 12110 ms,  elapsed time = 14269 ms.
    
    (887309 row(s) affected)
    
    
    Table 'bigtest2'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
    Table 'SQLTables'. Scan count 1, logical reads 11048, physical reads 0, read-ahead reads 0.
    
    SQL Server Execution Times:
       CPU time = 4843 ms,  elapsed time = 6946 ms.
    
    (887309 row(s) affected)
