  1. #1
    Join Date
    Oct 2012
    Posts
    5

    Unanswered: MySql Tuning for Performance

    Hi guys,

    My product is written in Java and shell scripts. The Java application runs on Red Hat Enterprise Linux Server release 5.4 (Tikanga). My back-end database is MySQL Enterprise Server edition 5.1.51, running on a separate Red Hat Enterprise Linux Server release 5.4 (Tikanga) machine. I am inserting records into the MySQL database from the Java machine, but MySQL takes a long time to complete the insertions. I need to improve my insert rate in records per second.

    My default storage engine is MyISAM.

    Java is running on a machine with 13 GB of RAM.
    MySQL is running on a machine with 8 GB of RAM.

    I have already tried changing the following parameters in my.cnf:
    1) key_buffer_size
    2) bulk_insert_buffer_size
    3) query_cache_size
    4) thread_cache_size
    5) read_buffer_size
    etc. But I have not seen any improvement from these modifications. Please guide me on how to improve my insertion rate.
    Last edited by althafit; 10-12-12 at 04:15. Reason: mentioning storage engine

  2. #2
    Join Date
    Sep 2009
    Location
    San Sebastian, Spain
    Posts
    880
    I think you should first define the number of inserts you need to complete per second. Some of the parameters you have changed will have no impact on insert performance, and others only affect MyISAM tables. To avoid making too many changes at once, first determine what an acceptable INSERT rate is. Then simulate, say, 100,000 inserts in the database to check that they meet that criterion. This should be done locally on the MySQL server. If the rate is not sufficient, look closely at the amount of work being performed during each insert, i.e. the number of indexes on the table that are updated with each row. Also, if you are using InnoDB tables, look at bundling the inserts into a single transaction, as this performs fewer background updates to the log files and can improve performance.

    If local performance does meet the criteria you have defined, then run the same inserts remotely to see the impact of the network. If you are transferring large amounts of data, the network itself can be the bottleneck.
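
    As a sketch of the transaction-bundling idea above (the table and column names here are hypothetical; adjust to your schema):

    ```sql
    -- With InnoDB, wrapping many inserts in one transaction means the
    -- log is flushed once at COMMIT instead of once per statement.
    START TRANSACTION;
    INSERT INTO measurements (device_id, recorded_at, value) VALUES (1, NOW(), 10.5);
    INSERT INTO measurements (device_id, recorded_at, value) VALUES (2, NOW(), 11.2);
    -- ... many more rows ...
    COMMIT;

    -- Multi-row INSERT syntax also cuts per-statement overhead,
    -- and helps MyISAM tables as well:
    INSERT INTO measurements (device_id, recorded_at, value)
    VALUES (1, NOW(), 10.5),
           (2, NOW(), 11.2),
           (3, NOW(), 9.8);
    ```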
    Ronan Cashell
    Certified Oracle DBA/Certified MySQL Expert (DBA & Cluster DBA)
    http://www.it-iss.com
    Follow me on Twitter

  3. #3
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
    get some metrics
    find out where the time is 'going'
    at present you don't know which part of the process to optimise/refine
    it could be Java
    it could be the query
    it could be your network
    it could be your shell process
    it could be the load on the processor

    until you know where your problem lies, you are just guessing.
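
    On the MySQL side, one rough way to get such metrics is to sample the server's own counters (the table name below is hypothetical; the status counters and SHOW PROFILE are available in MySQL 5.1):

    ```sql
    -- Sample the insert counter twice, some seconds apart; the delta
    -- divided by the interval is the server-side inserts per second.
    SHOW GLOBAL STATUS LIKE 'Com_insert';
    -- ... wait e.g. 10 seconds, then sample again ...
    SHOW GLOBAL STATUS LIKE 'Com_insert';

    -- Per-query timing breakdown for a single representative insert:
    SET profiling = 1;
    INSERT INTO measurements (device_id, value) VALUES (1, 10.5);
    SHOW PROFILE FOR QUERY 1;
    ```

    If the server-side rate is much higher than what the Java application achieves, the time is being lost in the application or the network, not in MySQL.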
    I'd rather be riding on the Tiger 800 or the Norton

  4. #4
    Join Date
    Oct 2012
    Posts
    5
    Thank you Ronan and healdem.

    Ronan,

    My insertion rate is currently 220 records per second, so the inserts take a long time. I need to increase it to 500 per second. Could you suggest which other parameters would have an impact on insert performance?
    Last edited by althafit; 10-15-12 at 08:46.

  5. #5
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
    get some metrics
    find out what is consuming your time
    work on that
    until you know what part of your process is causing problems you are pissing into the wind
    fiddling with parameters and settings, unless you know where the time is being used and why, is at best going to be a hit-or-miss affair; at worst it could bollix the server.

    depending on how the data is being presented you may get a benefit from a bulk insert

    it could be your table design, perhaps too many indexes

    again, unless you know where the time is going you don't know how to resolve the issue.

  6. #6
    Join Date
    Oct 2012
    Posts
    5
    Thanks, healdem, for prompting me to think about this from a different angle.

    I have tried Java machines with different amounts of RAM (2 GB and 13 GB). My insertion rate increased by 150 per second (from 70 to 220). I have not changed the RAM on the MySQL machine.

    I will check where the time is being spent.
    Last edited by althafit; 10-15-12 at 07:14.

  7. #7
    Join Date
    Sep 2009
    Location
    San Sebastian, Spain
    Posts
    880
    Healdem is right: you need to look at this scientifically. You need to understand not only the application but also what is happening in the database during the inserts. For instance, are the rows inserted by a single process or by multiple processes? Are there sufficient server processes available to handle all the incoming requests? You need to understand the database: are there indexes, are there triggers? You need to look at the storage engine of each table to figure out which one best fits your needs and how to optimize it. You also need to look at how the I/O is taking place, to see whether it can be distributed over multiple disks, which parallelizes write operations.

    My advice is to start by understanding how connections are made and making sure there are sufficient connections in the shared database connection pool to handle all the requests. If you are using Tomcat and have set up 20 connections but your database is configured to accept a maximum of 10, you have a mismatch. Also make sure the Linux server can handle the maximum number of connections you define, via the Linux kernel parameter settings.

    If that is acceptable, then look at what is happening during the insert operations. Are there triggers? Are there indexes? Do the indexes need to be enabled during the insert operations, or can they be dropped and recreated once the inserts have completed? Are other processes reading while you are writing to the database? MyISAM uses table-level locking; would row-level locking suit you better? There are many things you can do, but my advice is to try one thing at a time. Keep a set of scripts that perform inserts representative of your workload; each time a configuration is changed, rerun the scripts and check whether performance has improved or not. Remember, this is a science.
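
    To check the connection-pool mismatch mentioned above, the server side can be inspected with standard status queries (these need the PROCESS/SUPER-level view depending on your grants):

    ```sql
    -- What the server is configured to accept:
    SHOW VARIABLES LIKE 'max_connections';

    -- High-water mark and current usage; if Max_used_connections is at
    -- or near max_connections, the pool is being throttled by the server.
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    ```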

  8. #8
    Join Date
    Oct 2012
    Posts
    5
    Yes, Ronan. We use indexes, but we cannot remove them from the tables.

  9. #9
    Join Date
    Sep 2009
    Location
    San Sebastian, Spain
    Posts
    880
    Are you also using triggers? How many indexes are there on the table you are inserting into? Have you tried a test run inserting without the indexes, to see if that improves overall performance? Even though you cannot remove them in production, the result gives you more information and might even justify redesigning the data model. As I said before, it is all about changing, testing and measuring.
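
    For a MyISAM table, such a test does not even require dropping the indexes: non-unique index maintenance can be suspended for the duration of a bulk load and rebuilt in one pass afterwards (the table name below is hypothetical):

    ```sql
    -- MyISAM only: stop updating non-unique indexes per row.
    ALTER TABLE measurements DISABLE KEYS;

    -- ... run the timed bulk inserts here ...

    -- Rebuild the non-unique indexes in a single pass.
    ALTER TABLE measurements ENABLE KEYS;
    ```

    Comparing the timed run against a normal run shows how much of the per-insert cost is index maintenance.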

  10. #10
    Join Date
    Oct 2012
    Posts
    5
    We do not use triggers. We have 35 tables in total, and each table has one index. Java inserts data into all the tables at the same time. I have not tried inserting without the indexes; I will check.
