Unanswered: JDBC, Oracle, PreparedStatement, Batch and Querytimeout
I am working on a Java program that will connect to some remote Oracle databases with JDBC and collect data to be inserted into a central Oracle database. This program will run at night and therefore it must be quite failsafe.
The problem I am having trouble with is when someone has locked the table the program inserts into. If this occurs, an SQLException should be thrown instead of the default behaviour of blocking the program until the lock is released (a lot of "lock" here :-)).
I am using a PreparedStatement to execute the 10,000s of inserts. If I set the query timeout to, let's say, 20s and the table is locked, I get an SQLException after around 20s, exactly as I assumed. But sending each insert individually is slow, so I changed the code to use the addBatch() and executeBatch() methods instead. This speeds up the program a lot, but I don't receive any SQLException when executeBatch() is called while the table is locked; the program just hangs and waits until I remove the lock.
So I guess my question is: can the query timeout be used with executeBatch()? I can't find any documentation on this. I don't want to go back to sending each insert statement individually because it's painfully slow.
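For reference, my batch loop looks roughly like this (table and column names are placeholders, and whether setQueryTimeout() is honoured by executeBatch() seems to depend on the driver version, which is exactly what I'm unsure about):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsert {

    // How many executeBatch() calls are needed for n rows at a given batch size.
    static int batchCount(int n, int batchSize) {
        return (n + batchSize - 1) / batchSize;
    }

    // Sketch: insert rows in chunks, with a query timeout set on the statement.
    // "central_table" and its columns are made up for illustration.
    static void insertAll(Connection con, List<String[]> rows) throws SQLException {
        final int BATCH_SIZE = 500;
        String sql = "INSERT INTO central_table (id, payload) VALUES (?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setQueryTimeout(20); // seconds; may or may not apply to the batch
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // an SQLException here is what I'd expect on timeout
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch(); // flush the last partial chunk
            }
        }
    }
}
```

With 10,000 rows and a batch size of 500 that is 20 round trips instead of 10,000 individual statements.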
Re: JDBC, Oracle, PreparedStatement, Batch and Querytimeout
If at all possible, create another "work" table similar to the one being locked. Your upload then becomes two steps:
1.- get the data into the "central" database, into the "work" table dedicated exclusively to this purpose. The table needs to be cleared before you start uploading.
2.- merge the contents of the "work" table into the proper target table. This can be as simple as an Insert-Select statement. Since all records are carried in a single statement, you can use the query timeout. Your Oracle should easily cope with 10K records in a single transaction.
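A minimal sketch of step 2, assuming hypothetical table names work_table and target_table with matching columns:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class WorkTableMerge {

    // One Insert-Select carries all the work-table rows into the target.
    // Table and column names are placeholders for illustration.
    static final String MERGE_SQL =
        "INSERT INTO target_table (id, payload) " +
        "SELECT w.id, w.payload FROM work_table w " +
        "WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = w.id)";

    // Step 2: because the transfer is a single statement, one query timeout
    // covers the whole thing, and a lock on target_table surfaces as an
    // SQLException instead of hanging the program.
    static void merge(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.setQueryTimeout(20); // seconds
            st.executeUpdate(MERGE_SQL);
            con.commit();
        }
    }
}
```

The NOT EXISTS guard also makes the statement safe to re-run if a previous attempt was interrupted part-way.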
I often find this approach allows me to add some logic between steps 1 and 2. In my work table, I tend to have additional columns with names like "existing" or "is_different". Then, with a few update statements, I can identify duplicate records, i.e. cases where the same record already exists in the target table. The upload gains robustness if it can be re-executed (even partially) without risk of duplication. All you need is a unique primary key.
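The flag-column idea might look like this (again with made-up table names, and an "existing" column that defaults to 'N' on upload):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class MarkDuplicates {

    // Flag work-table rows whose primary key is already in the target.
    static final String MARK_SQL =
        "UPDATE work_table w SET w.existing = 'Y' " +
        "WHERE EXISTS (SELECT 1 FROM target_table t WHERE t.id = w.id)";

    // Insert only the rows that were not flagged as duplicates.
    static final String MERGE_SQL =
        "INSERT INTO target_table (id, payload) " +
        "SELECT id, payload FROM work_table WHERE existing = 'N'";

    // Both statements are single SQL commands, so the query timeout
    // applies to each of them; re-running the pair cannot duplicate rows.
    static void mergeNewRowsOnly(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.setQueryTimeout(20); // seconds
            st.executeUpdate(MARK_SQL);
            st.executeUpdate(MERGE_SQL);
            con.commit();
        }
    }
}
```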