11-16-09, 19:31 #1 — Registered User (joined Feb 2003)
Unanswered: SQLite: a way to increase performance on inserts?
Is there a way to increase performance on inserts into a SQLite database? (Read performance is very fast.) I have a Java service that inserts thousands of rows, and I get fewer than 10 rows per second, presumably because each insert has to wait for the previous one to finish before it starts. Is there a way to configure SQLite to queue up inserts and run them in batches, without doing it programmatically in my Java service? When I run the same service against MySQL, PostgreSQL, or SQL Server, it is blazing fast (500-4000 rows/sec), but with SQLite I am lucky to get 6 rows/sec. Those heavier-weight databases are doing something that SQLite, Hypersonic, and Firebird do not. I don't know a lot about transactions and isolation levels, so I'm hoping someone here understands it.
11-26-09, 11:41 #2 — Registered User (joined Oct 2002, Baghdad, Iraq)
From this page:
Because it does not have a central server to coordinate access, SQLite must close and reopen the database file, and thus invalidate its cache, for each transaction. In this test, each SQL statement is a separate transaction so the database file must be opened and closed and the cache must be flushed 1000 times. In spite of this, the asynchronous version of SQLite is still nearly as fast as MySQL. Notice how much slower the synchronous version is, however. SQLite calls fsync() after each synchronous transaction to make sure that all data is safely on the disk surface before continuing. For most of the 13 seconds in the synchronous test, SQLite was sitting idle waiting on disk I/O to complete.
When all the INSERTs are put in a transaction, SQLite no longer has to close and reopen the database or invalidate its cache between each statement. It also does not have to do any fsync()s until the very end. When unshackled in this way, SQLite is much faster than either PostgreSQL or MySQL.
So you have two options:
1. Disable the fsync via PRAGMA or by recompiling.
2. Bundle all the INSERTs in a transaction.
I'd recommend #2. It's almost as fast, but without the danger of a corrupted file if I/O doesn't complete. It's also far more portable to other DBMSs.
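To make option #2 concrete, here's a minimal sketch in Python (the poster's service is Java, but the technique is identical): wrap all the INSERTs in one transaction so SQLite issues a single fsync() at COMMIT instead of one per row. The table and row data are made up for illustration.

```python
import os
import sqlite3
import tempfile

# Hypothetical demo table; the point is the transaction, not the schema.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, "row %d" % i) for i in range(10000)]

# Option #2: one transaction around all 10,000 INSERTs.
# The "with conn:" block opens a transaction and commits on clean exit,
# so SQLite fsync()s once at the end rather than once per statement.
with conn:
    conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)

# Option #1 (not recommended) would instead be:
#   conn.execute("PRAGMA synchronous = OFF")
# which skips the fsync()s entirely, at the cost of possible corruption
# if the machine loses power mid-write.

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
conn.close()
```

In Java/JDBC the same idea is `connection.setAutoCommit(false)`, then `addBatch()`/`executeBatch()` on a `PreparedStatement`, then a single `commit()` at the end.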