  1. #1
    Join Date
    Mar 2004
    Posts
    5

    Unanswered: methods for ID's

    Hello,

I am looking for alternate methods of ID-ing rows in my tables. I have many different tables with different attributes and rates of inserts, updates, and deletions. I am using MySQL/PHP.

    Currently I am using:

INT AUTO_INCREMENT for tables where very few inserts will occur and, once the data is in there, it will never be deleted.

    DATETIME for tables that are updated continuously and have many inserts and many deletions.

    Both of these types of tables will be used by only one or two users.

For some new tables that will potentially be used by hundreds of users at the same time, I am looking for a new method to ID the rows. AUTO_INCREMENT doesn't work well, as I do not want to have to continuously clean up my tables (and the ones connected to them) as the numbers get out of whack with deletions. DATETIMEs pose the problem of multiple users inserting rows within the same second of each other and not having a unique ID.

I was thinking of maybe using a VARCHAR and building IDs from microtime and date.
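A minimal sketch of what that microtime-style VARCHAR key might look like (the table and column names here are made up for illustration). Note that two inserts landing in the same microsecond would still collide, so MySQL's built-in UUID() function is shown as an alternative:

```sql
-- Hypothetical table using a VARCHAR primary key instead of AUTO_INCREMENT.
CREATE TABLE orders (
    id         VARCHAR(36) NOT NULL PRIMARY KEY,
    user_id    INT         NOT NULL,
    created_at DATETIME    NOT NULL
);

-- An ID built by the application from date + microtime,
-- e.g. in PHP: $id = date('YmdHis') . sprintf('%06d', $microsec);
INSERT INTO orders VALUES ('20040315103542123456', 42, NOW());

-- Alternatively, MySQL's UUID() generates a value that is unique
-- even across concurrent inserts on different connections:
INSERT INTO orders VALUES (UUID(), 43, NOW());
```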

Any suggestions?

  2. #2
    Join Date
    Apr 2002
    Location
    Toronto, Canada
    Posts
    20,002
    numbers get out of whack with deletions???

    if the gaps in the numbers caused by deletions bother you, you need to reconsider why

    the purpose of an auto_increment primary key is to provide uniqueness-- that is all

    gaps should not matter, and if they do, you are using the numbers incorrectly
    rudy.ca | @rudydotca
    Buy my SitePoint book: Simply SQL

  3. #3
    Join Date
    Mar 2004
    Posts
    5

    not really....

    a suggestion.

I have read several posts in reply to people wanting to reorder their tables and reuse the deleted numbers. I do not wish to have to export and re-insert the data, THEN update 20 other tables that contain the old IDs.

I am just looking for other suggestions. I am well aware of how AUTO_INCREMENT behaves and that it nicely inserts a unique ID on each insert.

If an ID can be based on date and time, it will never have to be worried about down the road.

Has anyone used any methods other than INT AUTO_INCREMENT or DATETIME?

  4. #4
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
    Hello,

I've just read the part of the MySQL manual about AUTO_INCREMENT, and according to the user comments, the behaviour of such fields can change from one version of MySQL to another. Consequently, I would really avoid using them.

Moreover, I don't know how such a field behaves when it reaches its maximum value (which depends on its datatype). Maybe r937 can give advice on that point?

Another common approach, and the one used when you build a sound database design, is to find the set of columns that uniquely identifies each row in your table and make that combination of columns the primary key. In your case, adding one or two columns to the DATETIME one might do it. I also think you could look for documentation on "database design": that may help you understand more about good database design and, among other things, about how to choose primary keys.
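As a sketch of that idea, a composite primary key might look like this (hypothetical table and column names; the assumption here is that one user cannot insert two rows within the same second, which makes the pair unique):

```sql
-- No surrogate key: the combination of user and timestamp
-- identifies each row.
CREATE TABLE messages (
    user_id    INT      NOT NULL,
    created_at DATETIME NOT NULL,
    body       TEXT,
    PRIMARY KEY (user_id, created_at)
);
```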

    Regards,

    RBARAER

  5. #5
    Join Date
    Apr 2002
    Location
    Toronto, Canada
    Posts
    20,002
when an auto_increment column hits its datatype's limit, you cannot add additional rows

(easy to test for yourself with tinyint)
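A quick version of that tinyint experiment (table name is arbitrary; signed TINYINT tops out at 127, so once the counter reaches it, the next auto-generated value collides):

```sql
CREATE TABLE t (id TINYINT AUTO_INCREMENT PRIMARY KEY);

-- Jump the counter straight to the datatype's maximum:
INSERT INTO t VALUES (127);

-- The next insert cannot go past 127 and fails with a
-- duplicate-key error on the primary key:
INSERT INTO t VALUES (NULL);
```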

    so just make sure you define a datatype big enough

    integer goes up to 2 billion something

    guess how many rows you can have with bigint

    here's a gedankenexperiment -- assume your application adds 1000 rows per second, how long before you hit the limit on a bigint auto_increment?

    daminate, for other options, do a google for surrogate versus natural key

  6. #6
    Join Date
    Aug 2004
    Location
    France
    Posts
    754
Well, with BIGINT, if I'm right, at 1000 inserts per second we could go on inserting data for 292,471,208 YEARS. Not bad...
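The arithmetic behind that figure can be checked directly in MySQL (a signed BIGINT maxes out at 9,223,372,036,854,775,807, and this calculation ignores leap years):

```sql
-- max BIGINT / (1000 ids per second * seconds per year)
SELECT 9223372036854775807 / (1000 * 60 * 60 * 24 * 365) AS years;
-- ≈ 292,471,208 years
```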

    Thanks for your answer r937.

    Regards,

    RBARAER

  7. #7
    Join Date
    Apr 2002
    Location
    Toronto, Canada
    Posts
    20,002
    yes, 1000 rows/second for a quarter of a billion years

    want to bet you run out of disk space before you run out of numbers?

