  1. #1
    Join Date: Apr 2007
    Posts: 130

    Unanswered: experience sharing on database size needed

    Hello friends,

    Could you share your experience with the practical limits on the number of records per table and per database in real applications?

    How does MySQL perform when a database holds 100 million records, and what is a sensible way to deal with that many records?


    Thanks

  2. #2
    Join Date: Apr 2004
    Location: Europe->Sweden->Stockholm
    Posts: 71
    It all depends on what you do. Are you going to SUM() numbers over large parts of the data, or retrieve a few rows by primary key or by a set of columns covered by a good index?

    I have dealt with InnoDB tables with up to 500 million rows. I would say that the retrieval time for a row was roughly the same as with a very small table, but as the tables grow you are likely to want to retrieve more rows. We ran statistics queries and got more and more statistics per day. That meant that the queries had to read more and more rows. However, reading single or a small number of rows by primary key or with the help of a good index is more or less as fast as ever.
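
    For example (the table and column names below are made up for illustration), the difference in query shape looks roughly like this:

        -- Point lookup by primary key: stays fast even with hundreds of millions
        -- of rows, because InnoDB only walks the primary-key B-tree.
        SELECT * FROM stats WHERE id = 123456789;

        -- Lookup via a good secondary index: also fast, as long as the WHERE
        -- clause matches the indexed columns.
        SELECT * FROM stats WHERE customer_id = 42 AND stat_date = '2007-04-01';

        -- Aggregation over a long period: must read every matching row, so it
        -- slows down as the covered range (and the table) grows.
        SELECT stat_date, SUM(hits)
        FROM stats
        WHERE stat_date BETWEEN '2006-01-01' AND '2007-01-01'
        GROUP BY stat_date;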

    The biggest problems were mostly related to backup, schema changes and other types of maintenance. MySQL's inability to perform "simple" changes such as adding or dropping indexes online can become a real problem when a single change takes 4 hours (or 2 days).
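
    As an illustration (the table and index names are invented), adding a secondary index is a single statement, but on the InnoDB versions of that era it copies the whole table into a new one with the index in place, which is where the hours come from:

        -- One statement, but internally the table is rebuilt row by row.
        ALTER TABLE stats ADD INDEX idx_customer_date (customer_id, stat_date);

        -- Dropping the index again is just as expensive on those versions.
        ALTER TABLE stats DROP INDEX idx_customer_date;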

    While that is not really MySQL's fault, backing up and restoring was painful. Simply inserting a few billion rows takes a very long time. To make sure that the backups actually worked, we tried restoring from backup about once a month, and writing all that data back took about a full workday of crunching.
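
    A rough sketch of the kind of bulk export and reload involved (file paths and table names are made up; a plain mysqldump works too, but replaying INSERT statements is even slower):

        -- Export one table as a tab-separated file on the database server.
        SELECT * FROM stats INTO OUTFILE '/backup/stats.tsv';

        -- Restore: recreate the empty table, then bulk-load the file.
        -- Even LOAD DATA, which is much faster than replaying INSERTs,
        -- still has to write every row and rebuild every index.
        LOAD DATA INFILE '/backup/stats.tsv' INTO TABLE stats;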

  3. #3
    Join Date: Apr 2007
    Posts: 130

    thanks for sharing

    snorp,

    thank you for sharing your valuable experience with me.

    In your 500-million-record case, did you hold all the records in a single table, or did you break them into several parts held in several tables?


    Thanks

  4. #4
    Join Date: Apr 2004
    Location: Europe->Sweden->Stockholm
    Posts: 71
    The biggest tables had about 500 million rows of statistical data. We did not split the big tables, but we did purge data that was no longer needed rather aggressively from some tables in order to save disk space. The biggest problems were not the query time needed to find a certain number of records, but rather that we were often looking for a lot of records, for example when creating charts for longer time periods.
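
    A minimal sketch of that kind of purge (table and column names are invented, and the exact predicate and batch size depend on the schema); deleting in limited batches keeps each transaction small:

        -- Purge rows older than 90 days in batches so each transaction stays
        -- small and the table is not tied up for the whole run.
        DELETE FROM stats
        WHERE stat_date < DATE_SUB(CURDATE(), INTERVAL 90 DAY)
        LIMIT 10000;
        -- Repeat (e.g. from a cron job or a script loop) until no rows are deleted.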
