  1. #1
    Join Date
    Oct 2002
    Posts
    10

    Unanswered: MySQL scalability

    Can anyone point me in the direction of some info on MySQL scalability and performance?

    I'm selecting a DB for an app running 5,000 transactions per hour, with the biggest table holding 10 million rows.

    I am considering SQL Server too, but am inclined toward MySQL (which I know very little about) for both price and OS.

    My first direct question: does MySQL have a partitioning facility? For this 10-million-row table, can I cut it into 10 parts of a million rows each, so that scans on the partitioned index only access the parts they need to?

    More generally, though, I'm asking whether people have experience of using MySQL databases of around this size.

  2. #2
    Join Date
    Sep 2002
    Location
    Montreal, Canada
    Posts
    224
    Provided Answers: 1

    Re: MySQL scalability

    Originally posted by Ae589
    Can anyone point me in the direction of some info on MySQL scalability and performance?

    I'm selecting a DB for an app running 5,000 transactions per hour, with the biggest table holding 10 million rows.

    I am considering SQL Server too, but am inclined toward MySQL (which I know very little about) for both price and OS.

    My first direct question: does MySQL have a partitioning facility? For this 10-million-row table, can I cut it into 10 parts of a million rows each, so that scans on the partitioned index only access the parts they need to?

    More generally, though, I'm asking whether people have experience of using MySQL databases of around this size.
    Nope, but MySQL has the MERGE table type... In my case (a 65-million-row table), I had the data partitioned into table_0, table_1, ..., table_n, plus table_overflow, and created a MERGE table over all of those...

    So key modulo (n + 1) tells you which table the record is stored in... Two possibilities: the row is in its "right" table, or it is in table_overflow (I use INSERT_METHOD=LAST; see the docs for MERGE). Every once in a while (daily), I scan the overflow table and move the rows there into their "right" tables...

    Basically, you're letting the app do the partitioning... Works fine.
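    Here's a rough sketch of what that setup might look like with just two partitions (the schema and names are made up for illustration; older MySQL versions use TYPE= where newer ones use ENGINE=):

        -- Hypothetical partition tables; all must be MyISAM with
        -- identical column definitions.
        CREATE TABLE table_0 (id INT NOT NULL, created DATE, PRIMARY KEY (id)) ENGINE=MyISAM;
        CREATE TABLE table_1 (id INT NOT NULL, created DATE, PRIMARY KEY (id)) ENGINE=MyISAM;
        CREATE TABLE table_overflow (id INT NOT NULL, created DATE, PRIMARY KEY (id)) ENGINE=MyISAM;

        -- The MERGE table presents the partitions as one logical table.
        -- INSERT_METHOD=LAST routes inserts to the last table in the
        -- UNION list, i.e. table_overflow.
        CREATE TABLE big_table (id INT NOT NULL, created DATE, KEY (id))
            ENGINE=MERGE UNION=(table_0, table_1, table_overflow)
            INSERT_METHOD=LAST;

        -- Reads and writes go through the merge table:
        INSERT INTO big_table VALUES (42, CURDATE());
        SELECT * FROM big_table WHERE id = 42;

        -- Daily job: move overflow rows to their "right" partition
        -- (id MOD 2 with two partitions). Lock so no inserts slip in
        -- between the copy and the delete.
        LOCK TABLES table_0 WRITE, table_1 WRITE, table_overflow WRITE;
        INSERT INTO table_0 SELECT * FROM table_overflow WHERE MOD(id, 2) = 0;
        INSERT INTO table_1 SELECT * FROM table_overflow WHERE MOD(id, 2) = 1;
        DELETE FROM table_overflow;
        UNLOCK TABLES;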

    If you're comfortable with Linux, I'd say use MySQL on that platform... Better performance... And 5,000 transactions an hour ain't much with decent hardware... But are you going to have a mix of UPDATE/INSERT/DELETE/SELECT, or mainly SELECTs?

  3. #3
    Join Date
    Oct 2002
    Posts
    10
    Cheers bstjean - I'll have a look at using the MERGE table option.

    5,000 isn't a great deal, but that's 5,000 'customer experiences', each of which is around 10-15 requests, all of them selects apart from one insert or update. They won't be primary-key selects either; they'll be date-range scans.

    Not too worried about that - as you say, decent hardware and good design/tuning should cover it. I was mainly worried about the performance of this large table in an unknown DBMS. Your 65m-row table has inspired confidence!
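    For the curious, the date-range scans I have in mind would look something like this (table and column names are hypothetical):

        -- A composite index with the date column first lets MySQL read
        -- only the matching slice of the index.
        CREATE TABLE events (
            id       INT NOT NULL,
            event_dt DATE NOT NULL,
            customer INT NOT NULL,
            KEY idx_dt (event_dt, customer)
        ) ENGINE=MyISAM;

        SELECT customer, COUNT(*)
        FROM events
        WHERE event_dt BETWEEN '2002-10-01' AND '2002-10-07'
        GROUP BY customer;

        -- EXPLAIN should report type=range on idx_dt for this query.
        EXPLAIN SELECT * FROM events
        WHERE event_dt BETWEEN '2002-10-01' AND '2002-10-07';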

  4. #4
    Join Date
    Oct 2003
    Location
    Denver, Colorado
    Posts
    137
    If you require transaction support, you can't use the MERGE table type to create the huge table: a MERGE table must be made up of MyISAM tables, which don't support transactions. If you are flexible about how you insert and update data, then MyISAM tables would be fine (you just need to use the LOCK TABLES statement to ensure data consistency).
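    A minimal sketch of that LOCK TABLES pattern (the accounts table and columns are made up for illustration):

        -- LOCK TABLES gives MyISAM transaction-like consistency for a
        -- group of statements; other sessions block until UNLOCK TABLES.
        LOCK TABLES accounts WRITE;

        UPDATE accounts SET balance = balance - 100 WHERE id = 1;
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;

        -- Note there is no rollback: if the second UPDATE fails, the
        -- app must undo the first one itself.
        UNLOCK TABLES;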

  5. #5
    Join Date
    Sep 2002
    Location
    Montreal, Canada
    Posts
    224
    Provided Answers: 1
    Originally posted by Ae589
    Cheers bstjean - I'll have a look at using the MERGE table option.

    5,000 isn't a great deal, but that's 5,000 'customer experiences', each of which is around 10-15 requests, all of them selects apart from one insert or update. They won't be primary-key selects either; they'll be date-range scans.

    Not too worried about that - as you say, decent hardware and good design/tuning should cover it. I was mainly worried about the performance of this large table in an unknown DBMS. Your 65m-row table has inspired confidence!
    By the way, the Genome project is using MySQL to store a few billion (yep, you read that right: billion) schemata of the human genome... And Yahoo Finance also uses MySQL... And the list goes on and on... Don't worry about those 10 million rows... As long as you have decent hardware/network/OS and indexing, you're okay... Just like with Oracle, DB2, Informix, Sybase, PostgreSQL and others...

    If you're worried about performance, install a cheap Windows server with MySQL and generate some test data (you'll find lots of tools and scripts to generate test data) and see how it performs... If you want to make sure you won't run into any major problems in the near future, plan a few years ahead... Generate 50 million rows and see how the app/server handles the load... You'll get a better idea of what you need... Once you're done, adjust your tests, and remember that MySQL on Linux will be way faster!!
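    If you don't have a generator handy, a quick-and-dirty way to mass-produce rows in plain SQL (hypothetical table; each re-run of the last statement doubles the row count, so about 26 runs gets you past 50 million):

        CREATE TABLE test_data (
            id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            created DATE NOT NULL,
            payload CHAR(100) NOT NULL
        ) ENGINE=MyISAM;

        INSERT INTO test_data (created, payload) VALUES (CURDATE(), 'seed');

        -- Re-run this to double the table each time; RAND() spreads
        -- the dates over the past year so range scans have work to do.
        INSERT INTO test_data (created, payload)
        SELECT CURDATE() - INTERVAL FLOOR(RAND() * 365) DAY, payload
        FROM test_data;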

    Hope this helps...

    P.S. If you need other info on performance, the MySQL mailing lists are filled with posts from readers using MySQL under *very heavy* load... They could help... The MySQL community is very friendly and helpful... You'll get answers in minutes! You can also try the #MySQL IRC channel... A few guys hanging out there are pretty experienced in perf. tuning...
