  1. #1
    Join Date
    Feb 2003
    Posts
    3

    Unanswered: Problems deleting large amounts of records in ASE 12.5

    Hello,

    I have a stored procedure that needs to delete around 4 million records nightly from a table of about 6 million. The table holds "projections" that are recalculated every night, so the old ones are deleted and replaced with new ones. A Java program using jConnect drives this process.

    The stored proc works fine when it is run outside the Java application (in SQL Advantage, for example), but when it is executed through a PreparedStatement it only deletes around 30,000 to 60,000 records before it seems to quit, though no exception is thrown in the application. From the application's standpoint, the stored proc appears to have executed successfully.

    I am hoping someone can spot an obvious gotcha here that you all may know about. The stored proc was written by our DBA; it loops through all of the projections, deleting about 5,000 at a time and committing after each batch, until it gets through the whole 4 million. He has no idea why it does not work when executed from the Java code. I can provide more details if required. BTW, the reason for the stored proc in the first place is that when I ran the delete as straight SQL, it quickly filled the transaction log.

    Any ideas/help are appreciated.

    Thanks, Tom Acree

  2. #2
    Join Date
    Sep 2002
    Location
    Hong Kong
    Posts
    159

    Re: Problems deleting large amounts of records in ASE 12.5

    Are there any return codes fed back to the Java program? I'm wondering if the proc is "exiting normally" after deleting only a few rows.

    How are exceptions generated? Are they raised on a negative return code, or on any non-zero return code?

    What happens if the stored proc is called multiple times via JConnect? Does it delete 30000 rows each time or just 30000 the first time and 0 on subsequent calls?

    I'm not a Java or JConnect expert so please excuse any "dumb" questions.

  3. #3
    Join Date
    Mar 2001
    Location
    Lexington, KY
    Posts
    606
    Since you are deleting the vast majority of the rows, you are most likely going to run into:
    outdated statistics
    heavy fragmentation

    It would be better to 'SELECT INTO' a new table holding the 2 million 'good' rows, drop the old table, and then rename the new table to the old table's name. That should save you significant I/O. The downsides are that while you are dropping/renaming the table, requests against it will fail, and any stored procedures/triggers referencing the old table will need to be dropped and re-created (if I remember correctly this is the case, since they refer to tables by object ID and that ID changes once you drop and re-create the table).
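
    Very roughly, the nightly job could drive the whole swap like this (the table names, the "keep" condition, and the connection details are all made up, and SELECT INTO needs the 'select into/bulkcopy' option enabled on the database):

    import java.sql.*;

    public class RebuildProjections {
        public static void main(String[] args) throws Exception {
            // jConnect driver class and connection details are placeholders
            Class.forName("com.sybase.jdbc2.jdbc.SybDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:sybase:Tds:dbhost:5000/ourdb", "user", "password");
            Statement st = con.createStatement();

            // 1. copy only the ~2 million rows you want to keep into a new table
            //    (keep_flag = 1 stands in for whatever identifies the good rows)
            int kept = st.executeUpdate(
                    "select * into projections_new from projections where keep_flag = 1");
            System.out.println(kept + " rows copied");

            // 2. drop the old table
            st.executeUpdate("drop table projections");

            // 3. give the copy the old table's name
            st.execute("exec sp_rename 'projections_new', 'projections'");

            st.close();
            con.close();
        }
    }
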
    Thanks,

    Matt

  4. #4
    Join Date
    Feb 2003
    Posts
    3

    Re: Problems deleting large amounts of records in ASE 12.5

    Originally posted by richardcrossley
    Are there any return codes fed back to the Java program? I'm wondering if the proc is "exiting normally" after deleting only a few rows.

    How are exceptions generated? Are they raised on a negative return code, or on any non-zero return code?

    What happens if the stored proc is called multiple times via JConnect? Does it delete 30000 rows each time or just 30000 the first time and 0 on subsequent calls?

    I'm not a Java or JConnect expert so please excuse any "dumb" questions.
    Thanks for the reply! The stored proc appears to run and finish cleanly, and it currently returns 0 as its return code. I have asked the DBA who wrote the stored proc to also return the number of rows deleted. We normally have to handle an SQLException in Java when a SQL error occurs, but none is being thrown in this case. Again, when run in SQL Advantage the proc successfully deletes the 4 million records, and there is nothing in the logs to indicate a problem. It is possible (maybe even likely) that this is a jConnect problem and I am barking up the wrong tree.

    As far as running the program multiple times goes, each run deletes another 30,000 to 60,000 records. I haven't tested this yet, but I assume that if we reduce the record count it may work correctly (though at that point I could just as well execute a plain DELETE statement).

    Here is the stored proc code that is getting executed:

    create procedure PCRDELETE
    as

        declare @rows_affected int

        select @rows_affected = 5000
        set rowcount 5000                       -- delete at most 5000 rows per pass

        while @rows_affected = 5000
        begin
            delete RI_PCR_PROJECTION_YEAR_IQ
            where PCR_DATE_RATED_DT = null      -- rows to be removed have a NULL rated date

            select @rows_affected = @@rowcount  -- rows removed in this pass
            select @rows_affected, 'rows deleted'
            commit                              -- keep the transaction log from filling
        end

        set rowcount 0                          -- restore normal row limits

    Maybe this will help.
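
    Stripped of our wrapper classes, the Java side boils down to something like this (the driver class, URL, and login are placeholders, and I have written out the full walk-through-the-results loop even though our actual wrapper may handle that part differently):

    import java.sql.*;

    public class RunPcrDelete {
        public static void main(String[] args) throws Exception {
            // jConnect driver class and connection details are placeholders
            Class.forName("com.sybase.jdbc2.jdbc.SybDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:sybase:Tds:dbhost:5000/ourdb", "user", "password");

            PreparedStatement ps = con.prepareStatement("exec PCRDELETE");
            boolean isResultSet = ps.execute();

            // The proc selects a "rows deleted" row for every 5000-row batch,
            // so this loop steps through everything it sends back.
            while (true) {
                if (isResultSet) {
                    ResultSet rs = ps.getResultSet();
                    while (rs.next()) {
                        System.out.println(rs.getInt(1) + " " + rs.getString(2));
                    }
                    rs.close();
                } else if (ps.getUpdateCount() == -1) {
                    break;                      // no more results of any kind
                }
                isResultSet = ps.getMoreResults();
            }

            ps.close();
            con.close();
        }
    }

    The loop at the end is just the standard JDBC pattern for a statement that can hand back several result sets and update counts.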

    Thanks, Tom

  5. #5
    Join Date
    Feb 2003
    Posts
    3
    Originally posted by MattR
    Since you are deleting the vast majority of the rows, you are most likely going to run into:
    outdated statistics
    heavy fragmentation

    It would be better to 'SELECT INTO' a new table holding the 2 million 'good' rows, drop the old table, and then rename the new table to the old table's name. That should save you significant I/O. The downsides are that while you are dropping/renaming the table, requests against it will fail, and any stored procedures/triggers referencing the old table will need to be dropped and re-created (if I remember correctly this is the case, since they refer to tables by object ID and that ID changes once you drop and re-create the table).
    This is a possibility, since this is a nightly process that runs when there are (generally) no users touching the data. One note: as I explained, this table holds the projections that are being deleted; what is not being deleted are the projections from previous years that have become actuals. Every year the number of actuals will grow by about 200,000, so at some point in the somewhat distant future the actuals will outnumber the projections and the table will hold over 8 million records. I doubt that necessarily precludes this strategy, and I will pass this approach along. Incidentally, I included the SQL for the stored proc in my previous reply, in case you are interested.

    Thanks for the help!

    Tom

  6. #6
    Join Date
    Jan 2003
    Posts
    26

    Re: Problems deleting large amounts of records in ASE 12.5

    Originally posted by tacree
    Thanks for the reply! The stored proc appears to run and finish cleanly, and it currently returns 0 as its return code. I have asked the DBA who wrote the stored proc to also return the number of rows deleted. We normally have to handle an SQLException in Java when a SQL error occurs, but none is being thrown in this case. Again, when run in SQL Advantage the proc successfully deletes the 4 million records, and there is nothing in the logs to indicate a problem. It is possible (maybe even likely) that this is a jConnect problem and I am barking up the wrong tree.

    As far as running the program multiple times goes, each run deletes another 30,000 to 60,000 records. I haven't tested this yet, but I assume that if we reduce the record count it may work correctly (though at that point I could just as well execute a plain DELETE statement).

    Here is the stored proc code that is getting executed:

    create procedure PCRDELETE
    as

        declare @rows_affected int

        select @rows_affected = 5000
        set rowcount 5000                       -- delete at most 5000 rows per pass

        while @rows_affected = 5000
        begin
            delete RI_PCR_PROJECTION_YEAR_IQ
            where PCR_DATE_RATED_DT = null      -- rows to be removed have a NULL rated date

            select @rows_affected = @@rowcount  -- rows removed in this pass
            select @rows_affected, 'rows deleted'
            commit                              -- keep the transaction log from filling
        end

        set rowcount 0                          -- restore normal row limits

    Maybe this will help.

    Thanks, Tom
    As you said, the problem might be with jConnect; I've seen similar issues with Perl's DBI. Maybe you could ask your DBA to create a stored proc that deletes 5K records and then returns. Your program could then establish a connection to the db, run the stored proc, drop the connection, reconnect, and repeat until everything is deleted, roughly like the sketch below.
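
    Something along these lines, completely untested (PCRDELETE_BATCH is a made-up name for a proc that deletes at most 5000 rows and returns how many it actually removed, and the connection details are placeholders):

    import java.sql.*;

    public class BatchDeleteDriver {
        public static void main(String[] args) throws Exception {
            // jConnect driver class and connection details are placeholders
            Class.forName("com.sybase.jdbc2.jdbc.SybDriver");

            int deleted;
            do {
                Connection con = DriverManager.getConnection(
                        "jdbc:sybase:Tds:dbhost:5000/ourdb", "user", "password");
                CallableStatement cs = con.prepareCall("{? = call PCRDELETE_BATCH}");
                cs.registerOutParameter(1, Types.INTEGER);
                cs.execute();

                // skip past any intermediate results so the return status is available
                while (cs.getMoreResults() || cs.getUpdateCount() != -1) {
                    // nothing to read, just moving on
                }
                deleted = cs.getInt(1);     // rows deleted in this pass

                cs.close();
                con.close();                // drop the connection and start over
            } while (deleted > 0);          // stop once a pass removes nothing
        }
    }

    Whether a fresh connection for every batch is really needed I don't know, but it keeps each call small and isolated, and it should tell you whether the connection itself is what is giving up.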
