  1. #1
    Join Date
    Apr 2007
    Posts
    15

    Unanswered: Need delay in my bulk insert statement

    I am working on a bulk insert statement. The stupid primary key, though, uses the Unix Epoch (seconds since 1970-01-01)... which means that I need some sort of "delay" in my statement, so that each row will have a unique primary key.

    Any ideas on creating a delay during my insert?

  2. #2
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    If I read this right, the primary key is based on a datetime field (probably populated by getdate()). If so, you are going to be a lot better off using either a GUID column or an identity column, for exactly the reasons you have just discovered. If this cannot be done, then you will have to give up the bulk insert idea, because it is too fast. You would have to gin up some VB script to get the records in with enough delay (3 ms or so) between records.
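    A minimal sketch of the identity-column idea (the table and column names are invented for illustration; the real schema would differ):
    Code:
    -- Hypothetical replacement table: let an IDENTITY column generate the key,
    -- so the bulk load can run at full speed with no artificial delay.
    CREATE TABLE PART_DAT_NEW (
        UniqueKey int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
        F_PART    varchar(200),
        F_NAME    varchar(200),
        F_USER    varchar(100)
    )

    -- Inserts simply omit UniqueKey; SQL Server numbers the rows itself.
    INSERT INTO PART_DAT_NEW (F_PART, F_NAME, F_USER)
    SELECT F_PART, F_NAME, F_USER
    FROM PART_DAT
    If the key must still look like an epoch timestamp, the identity value can be offset by a stored base time instead of stamping getdate() on every row.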

  3. #3
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    ....or bulk insert into an unconstrained staging table first....
    Testimonial:
    pootle flump
    ur codings are working excelent.
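    A rough sketch of the staging approach (the file path and table names are made up; a format file may be needed depending on the data file layout):
    Code:
    -- Unconstrained staging table: no primary key, so the bulk load cannot
    -- collide on duplicate keys no matter how fast it runs.
    CREATE TABLE PART_DAT_STAGE (
        F_PART varchar(200),
        F_NAME varchar(200),
        F_USER varchar(100)
    )

    BULK INSERT PART_DAT_STAGE
    FROM 'C:\data\part_dat.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

    -- Keys are then generated during the staging-to-target insert,
    -- rather than during the load itself.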

  4. #4
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Inserting from one table to the next is likely to be as quick as a bulk insert. The design needs to be revisited if this is a common occurrence.

  5. #5
    Join Date
    Apr 2007
    Posts
    15
    What if I create some kind of identity in a temp table or view, so that only the first "time" is grabbed as the seed, and then every other record just increments from it by 1?

    It dies on line 3 on @key.


    Code:
    DECLARE @key int
    SET @key = datediff(s, '19700101', getdate())
    -- Dies here: an IDENTITY seed must be a constant literal, not a variable
    DECLARE @tab table(UniqueKey int identity (@key,1), F_PART varchar(200), F_NAME varchar(200), F_USER varchar(100) )
    INSERT INTO @tab
    SELECT F_PART, F_NAME, F_USER
    FROM PART_DAT
    SELECT * FROM @tab
    Last edited by simp1eton; 10-28-08 at 12:03.

  6. #6
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    If this is SQL 2005, you can just use the ROW_NUMBER() function:
    Code:
    SELECT datediff(s,'19700101',getdate()) + ROW_NUMBER() over (order by F_PART) AS UniqueKey, F_PART, F_NAME, F_USER
    FROM PART_DAT
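    Assuming a target table with the same data columns plus the key (PART_TARGET is a made-up name here), the keyed SELECT above could feed the insert directly:
    Code:
    INSERT INTO PART_TARGET (UniqueKey, F_PART, F_NAME, F_USER)
    SELECT datediff(s, '19700101', getdate())
             + ROW_NUMBER() OVER (ORDER BY F_PART) AS UniqueKey,
           F_PART, F_NAME, F_USER
    FROM PART_DAT
    Note that even if F_PART is not unique, the generated keys are, since ROW_NUMBER() never repeats within one query.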

  7. #7
    Join Date
    Apr 2007
    Posts
    15
    Nah, we're on SQL 2000.
    So I'm gonna have to use the 2000 row-count hack.
    Looks like it's going to work OK, except that I'm subtracting the 3720, since it gets added based on the row number.

    Code:
    SELECT F_PART, F_NAME, F_USER, 
    (SELECT datediff(s,'19700101',getdate()) - 3720 
    + COUNT(*) FROM PART_DAT e2 
    WHERE e2.F_PART <= e.F_PART) AS UniqueKey 
    FROM PART_DAT e

  8. #8
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Quote Originally Posted by MCrowley
    Inserting from one table to the next is likely to be as quick as a bulkinsert. The design needs to be revisited, if this is a common occurrence.
    But he could loop through the staging table with a cursor.
    Wait....I feel like I am going to throw up now.....

    ....OK. I feel better.
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  9. #9
    Join Date
    Jun 2004
    Location
    Long Island
    Posts
    696
    Could always generate individual insert statements (ugly). This is how Embarcadero's DBArtisan handles bulk loads (when you use their tool).
