  1. #1
    Join Date
    Sep 2006
    Location
    Czech Republic
    Posts
    82

    Unanswered: Can stored procedure continue if select fails?

    Hello,

    I would like to know whether the following situation can occur:

    A SELECT in a stored proc fails, but the stored proc does not end, and the code after it continues executing...

    Example:

    create proc up_some_name
    as

    ... some code before ...

    select ... from ...  -- this select fails for some reason
    if ( @@error != 0 )  -- this is nonsense: if the error occurs, execution never reaches here
    begin
    ... do something ...
    end

  2. #2
    Join Date
    Sep 2003
    Location
    Switzerland
    Posts
    443
    If the code never proceeds to the next statement, then there's something wrong with the select.

    Check the showplan on it. For eg., if you are doing a table scan on a 100 million row table, yeah you can forget about any control back.

    set noexec on
    go
    set showplan on
    go
    SELECT blah
    go

    But keep in mind... If you are declaring local variables inside the proc, then the optimiser will have different showplans between running it inside a proc and running it as standalone SQL.

  3. #3
    Join Date
    Sep 2002
    Location
    Sydney, Australia
    Posts
    255
    Musil David

    1 Yes, it is completely ordinary that the SELECT "fails" (eg. due to a bug or a badly constructed WHERE clause, you get no rows) and the rest of the sproc continues. Sybase will crash out on certain severity levels, but you have to catch (determine) errors that are not so severe and report them (RAISERROR or PRINT statement). It is called error handling.
    2 You can check @@rowcount as well as @@error. Make sure you save the values of the global parms immediately after the SELECT, before examining them (RTFM)
    3 Refer also http://www.dbforums.com/showthread.php?t=1608722 Post 6.
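
    A minimal sketch of point 2, with illustrative variable names (@err, @rows, and the EXIT_ERR label are not from the thread; sysobjects is used only to make the example self-contained):

    Code:
    ```sql
    DECLARE @err int, @rows int

    SELECT name FROM sysobjects WHERE type = "U"
    -- capture BOTH globals in ONE statement, immediately after the verb;
    -- any intervening statement (even an IF) resets @@error and @@rowcount
    SELECT @err = @@error, @rows = @@rowcount

    IF ( @err != 0 )
        GOTO EXIT_ERR   -- the SELECT itself failed
    IF ( @rows = 0 )
        GOTO EXIT_ERR   -- no rows returned, where one or more were expected
    ```
    (EXIT_ERR is assumed to be a label defined later in the proc.)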

    Cheers
    Derek Asirvadem
    Senior Sybase DBA/Information Architect derekATsoftwaregemsDOTcomDOTau
    Anything worth doing is worth doing Right The First Time
    Spend your money on standards-compliant development or spend 10 times more fixing it

  4. #4
    Join Date
    Sep 2006
    Location
    Czech Republic
    Posts
    82
    I have read your recommendation about error tests... So I think there is no universal test that catches a crash after any command.

    you wrote:
    SELECT ...FROM ... WHERE ...
    SELECT @err = @@error, @rows = @@rowcount

    but

    @@error is useful, but if the select fails, the stored proc crashes and I never reach the next statement; I can only get the status one level up...

    @@rowcount is redundant - it depends on the logic: the select can sometimes return 0 rows and sometimes not... so I should rather test variables

    e.g.
    SELECT @ln_name = name FROM persones ...

    IF ( @ln_name = NULL )
    . . .
    ELSE
    . . .
    END


    But as concerns insert/update/delete, testing @@error there is well justified (e.g. testing for duplicate values), and I should write (without thinking) a minimal universal test after each:

    select @ln_err = @@error
    if ( @ln_err != 0 ) goto EXIT_ERR

    Correct?

    David

  5. #5
    Join Date
    Aug 2002
    Location
    Madrid, Spain
    Posts
    97
    Quote Originally Posted by Musil David
    A SELECT in a stored proc fails, but the stored proc does not end, and the code after it continues executing...
    I know this can occur with a conversion error. It's common with datetime datatype; for instance:

    create proc miproc (@a char(10))
    as
        select convert(datetime, @a)
        print "FIN"

    If you execute
    miproc "x"
    the select fails due to a conversion error and the stored proc ends there. "FIN" is not printed. Even worse, if the sproc is inside a "batch", the batch is ended, as in

    miproc "x"
    print "AFTER miproc" -- This is not printed
    go

    Most of the errors don't have these nasty effects. I think this problem is caused by ASE handling conversion errors as something similar to syntax errors, that prevent the execution of the whole SQL batch. And the moral is: do whatever is needed to avoid a query inside a stored proc causing a conversion error.
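
    One way (a sketch, not from the thread) to follow that moral inside the proc is to validate the string before converting it. The example below assumes a fixed yyyymmdd input format; note the LIKE pattern is only a rough first filter (a value like "20061939" still slips through), so a real implementation needs a stricter check, or validation in the client:

    Code:
    ```sql
    create proc miproc_safe (@a char(8))
    as
        -- validate the expected yyyymmdd shape BEFORE converting,
        -- so the batch-killing conversion error is never raised
        if ( @a like "[1-2][0-9][0-9][0-9][0-1][0-9][0-3][0-9]" )
            select convert(datetime, @a)
        else
            print "invalid date string: %1!", @a
        print "FIN"
    ```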

    Regards,
    Mariano

  6. #6
    Join Date
    Sep 2002
    Location
    Sydney, Australia
    Posts
    255

    Minimum Universal Error Checking

    David

    @@error is mandatory, essential, not just useful. As long as you have at least one meaningful PRINT statement under EXIT_ERR, that is ok. Since you have accepted that, let me proceed from there. In this case we are evaluating "minimum practice" as per my experience, but let us keep in mind the concept and virtue of "best practice", without argument.

    @@rowcount is not always necessary. However, if you are applying a standardised error handling structure, and you sometimes require zero rows, sometimes require one row, and other times require more than one row, then you may choose to have the standardised structure capture @@rowcount with the @@error (which you have to do anyway) so that it can then be interrogated (just like @@error) in the code that follows.

    Note that @@error is reset by every statement, and @@rowcount is reset by those statements that return rows; thus coders get into difficulty when they capture the two parms separately and get the sequence wrong, or when they have two IF statements without the capture. In the following, the second IF checks the success of the previous IF, not the success of the INSERT, whose status is therefore lost.
    Code:
    INSERT tempdb..mytable SELECT ... FROM ... WHERE ...
    IF ( @@rowcount = 0 ) goto EXIT_ERR  -- this IF resets @@error
    IF ( @@error != 0 ) goto EXIT_ERR    -- now checks the IF above, not the INSERT
    For a standard (without thinking) error check, you are better off:
    • capturing both parms in one statement following every verb
    • checking @in_err after every verb, always
    • checking @in_rows after every verb that returns rows (for zero, one or more based on requirement).
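
    Those three bullets, rendered as a sketch (the variable names @in_err/@in_rows follow the post; the table, the EXIT_ERR label, and the one-row requirement are illustrative):

    Code:
    ```sql
    INSERT tempdb..mytable SELECT id, name FROM sysobjects WHERE type = "U"
    -- one statement captures both parms, before anything can reset them
    SELECT @in_err = @@error, @in_rows = @@rowcount

    IF ( @in_err != 0 )      -- checked after every verb, always
        GOTO EXIT_ERR
    IF ( @in_rows = 0 )      -- this verb is required to affect at least one row
        GOTO EXIT_ERR
    ```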

    Separately, at the top of the proc, you must check that @@trancount is what it is planned/expected to be, otherwise a proc that works on its own will "sometimes" not work in the context of an open transaction.
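
    A sketch of that top-of-proc check, assuming a proc that is designed to open its own transaction and therefore must not be called inside one (the proc name and message are illustrative):

    Code:
    ```sql
    -- at the top of the proc: verify the transaction context is as planned
    IF ( @@trancount != 0 )
    BEGIN
        PRINT "up_myproc: must not be called inside an open transaction"
        RETURN (1)
    END
    ```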

    Null Check
    For many reasons, checking for a column value = null is not a sufficient method of identifying whether you have [not] received a row from the select. The null check is heavily dependent upon the specific implementation of null handling (good, bad; consistent, inconsistent), the coder's understanding of null handling, and the actual possibility of null being returned in that column. It is therefore not good enough to be implemented as a standard zero-rows-returned check.

    Sybase provides @@rowcount to check for that explicitly, so it is not reasonable to use a substitute that may or may not work, or that works only some of the time. If you are explicitly checking @@rowcount then you do not need to check the column value for null; I am not suggesting redundant code: I am advising that you use the prescribed method that works in every instance, over a method that is questionable.

    For standardised error checking (code that works every time regardless of dependencies [eg. null implementation]) and that others can immediately understand and rely on, we use an explicit @@rowcount check over an implicit null (or other value) check.

    Finally, most of the time, you will be performing DML verbs on db objects, and not loading a local variable, therefore most of the time you will not have a convenient @var loaded to check if it is null (or whatever). Plus the @var may be containing a value from the previous iteration, and not from the verb that failed.

    Definition of Fail/Crash
    There may be differences re our definitions of a select "failing":
    • if you encounter errors of higher Severity Levels (read up on it), ASE will terminate execution (crash) of the proc (and the SELECT @err = @@error ... never gets executed, so the error checking does not get executed)
    • if you encounter 1205 then ASE automatically rolls back the tran and returns to the code
    but you had better have an error check in it so that you catch that and restart your tran from the beginning xor exit; otherwise you will execute the second half of your tran with the first half missing - depending on your implementation of RI you may or may not get orphans and childless parents
    • if you encounter other errors (lower Severity Levels, not error 1205) that cause the SELECT to truly fail in ASE terms (ie. it returns a non-zero @@error), you need to identify that and exit
    • finally if everything worked as far as ASE was concerned (zero @@error returned), but the SELECT actually returned zero rows and you are expecting one or more rows (eg. because the code "worked" in development, and it has always "worked" but due to bad code or bad data it actually returned no rows in production) you need to know that, rather than proceeding with the execution of the remainder of the proc.
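
    A sketch of the 1205 (deadlock victim) case from the second bullet - restart xor exit. The table, variables, and labels are illustrative; a real template would cap the retries with a counter rather than looping unconditionally:

    Code:
    ```sql
    RETRY:
    BEGIN TRAN
    UPDATE mytable SET col = col + 1 WHERE id = @id
    SELECT @err = @@error
    IF ( @err = 1205 )       -- deadlock victim: ASE has already rolled the tran back
        GOTO RETRY           -- so restart the WHOLE tran from the beginning
    IF ( @err != 0 )
        GOTO EXIT_ERR        -- any other error: exit via the error handler
    COMMIT TRAN
    ```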

    Thus we have the identified minimum error checking (referred code http://www.dbforums.com/showthread.php?t=1608722 Post 6), it will no doubt be substantially more than an academically chosen or forum-discussed minimum, it is based on experience. This is the "universal test" [implement without thinking] for all code, for any kind of failure, after any verb. (Note this is actually less than our standard, which employs RAISERROR and templates with known labels, etc.) Choose what you like and live with the consequences: we never worry about whether the SELECT failed or why there are no rows in a tempdb table (if we did not get an error report, we know it worked and we know there are no valid rows).

    HTH

    You did not ask me, but your statement "testing for duplicate values" concerns me. If you want sociable code, you should be checking (via SELECT) that the intended INSERT/UPDATE/DELETE is in fact valid, before attempting it, and thus avoid failing on the attempt (and having to rely on checking afterward). This method saves the entire resource commitment for the invalid attempt (locks, log, updates to data and index pages) that has to be reversed, and allows other sociable users to proceed without delay and without robbing them of the soon-to-be-reversed resources. There are a few people out there coding as if they were the only user on the server: the code "works", but causes avoidable contention problems in "production", starts failing under load, and fails more and more often as the load increases.
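
    The check-before-attempt approach, sketched for a duplicate-key case (table and column names are hypothetical; @msg/@err/EXIT_ERR follow the template idea in this thread). Note that under concurrency the SELECT and the INSERT are not atomic by themselves; the standard assumes appropriate transaction and locking design around them:

    Code:
    ```sql
    -- verify the INSERT is valid BEFORE attempting it, instead of
    -- letting it fail on the unique index and reversing the work
    IF EXISTS ( SELECT 1 FROM customer WHERE cust_id = @cust_id )
    BEGIN
        SELECT @msg = "duplicate cust_id"
        GOTO EXIT_ERR
    END
    INSERT customer ( cust_id, name ) VALUES ( @cust_id, @name )
    SELECT @err = @@error, @rows = @@rowcount
    ```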

    Cheers
    Derek Asirvadem
    Senior Sybase DBA/Information Architect derekATsoftwaregemsDOTcomDOTau
    Anything worth doing is worth doing Right The First Time
    Spend your money on standards-compliant development or spend 10 times more fixing it

  7. #7
    Join Date
    Sep 2002
    Location
    Sydney, Australia
    Posts
    255

    There are many types of errors to capture ...

    Quote Originally Posted by Mariano
    ... Most of the errors don't have these nasty effects ...
    These are not "nasty effects", they are documented features. Datatype conversion errors (high Severity Level) will terminate the entire open transaction and discontinue any further execution of the series of nested procs. This should not be a surprise to anyone who considers themselves an SQL coder. These are "hard" errors in SQL, and can be easily avoided by standards that force Type-casting (SQL being a loosely Type-cast language and thus allowing lazy coding). Ensuring that you never have Datatype mismatch problems is an ordinary requirement.

    Likewise divide-by-zero. Likewise arithmetic overflow, but at least these can be constrained with SET ARITHABORT ... and SET ARITHIGNORE ...
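
    A sketch of those session settings (the exact option keywords and behaviour vary by ASE version - verify against your server's Reference Manual before relying on this):

    Code:
    ```sql
    -- constrain arithmetic errors for the session:
    set arithabort arith_overflow off   -- do not abort the batch/tran on overflow
    set arithignore arith_overflow on   -- suppress the warning messages as well
    ```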

    If you cannot take a course, then at least read all about it; that's what the manuals are there for. Test your code with illegal values and conditions (to ensure it fails correctly, instead of populating the db with rubbish, or ignoring errors and carrying on merrily), and get your peers to check it, before publishing it.

    If you have nested procs, then you better have standards for that as well. The @@trancount check is the absolute minimum, not the devised standard. Stored procs should always return a status, which must be checked by the caller. In my universe, the following (I am not implying that Mariano recommended this) is not allowed:
    Code:
    CalledProc @var1, @var2
    PRINT "after CalledProc"
    It must be:
    Code:
     
    ...
    SELECT @proc_name = "CallingProc"
    ...
    BEGIN TRAN @proc_name
    ...
    EXEC @return_stat = CalledProc @var1, @var2
    IF ( @return_stat != 0 )
    BEGIN
        SELECT @msg = "CalledProc failed"
        GOTO EXIT_ERR
    END
    ...
    EXIT_ERR:
    IF ( @err != 0 )
        PRINT "Proc %1! failed with error %2!", @proc_name, @err
    ELSE
        PRINT "Proc %1! failed at %2!", @proc_name, @msg
    ROLLBACK TRAN @proc_name
    RETURN (1)
    The point is, you should devise a standard template that handles all these issues, and produces consistent error reports that you can use to identify exactly what failed and exactly where it failed. As stated, my templates are more elaborate and use RAISERROR (that is what it is there for), not simple PRINTs.

    Engineered code is about three times the length of prototype code, in any language. Standard-compliant buildings are about three times the cost of the substandard variety, but they do not crack up when it rains, and the toilets do not back up during a party. In these days of "Write Only" coders, these fundamental principles are ignored. Until disaster strikes.

    Cheers
    Derek Asirvadem
    Senior Sybase DBA/Information Architect derekATsoftwaregemsDOTcomDOTau
    Anything worth doing is worth doing Right The First Time
    Spend your money on standards-compliant development or spend 10 times more fixing it

  8. #8
    Join Date
    Sep 2006
    Location
    Czech Republic
    Posts
    82
    Many thanks for your informative post.

    May I still ask a few questions? (Maybe I'm being bullheaded about this...)

    @@error is mandatory, essential, not just useful. As long as you have at least one meaningful PRINT statement under EXIT_ERR, that is ok.
    Why should I use a PRINT statement? I get a server message with number, severity, proc name and line... (In addition, some clients that call these procs don't accept PRINT statements.)

    Finally, most of the time, you will be performing DML verbs on db objects, and not loading a local variable, therefore most of the time you will not have a convenient @var loaded to check if it is null (or whatever). Plus the @var may be containing a value from the previous iteration, and not from the verb that failed.
    I'm still not convinced that I should test @@error after a SELECT... I tried some examples: a missing object (it crashes, but I can test the return value one level up), zero division (total crash), the convert example by corral http://www.dbforums.com/showpost.php...24&postcount=5 (total crash)... For these examples there is no sense in testing @@error. But the same example with a convert from int to datetime carries on, so there it would be useful to test @@error. But then I think I would have to test every simple select (e.g. a simple assignment of values), and my code would get awfully long...

    So, conclusion

    For a standard (without thinking) error check for INSERT/UPDATE/DELETE, you are better off:
    • capturing both parms in one statement following every verb
    • checking @in_err after every verb, always
    • checking @in_rows after every verb that returns rows (for zero, one or more based on requirement).

    As concerns SELECT, you should decide about the @@error and @@rowcount tests according to the requirement.

    Definition of Fail/Crash
    There may be differences re our definitions of a select "failing":
    • if you encounter errors of higher Severity Levels (read up on it), ASE will terminate execution (crash) of the proc (and the SELECT @err = @@error ... never gets executed, so the error checking does not get executed)
    I read up about it, but I'm none the wiser...

    http://manuals.sybase.com/onlinebook...32749;pt=32677

    e.g. the example with convert to datetime (int to datetime, varchar to datetime)... Both are severity level 16, but in the first case I can catch @@error, while the second crashes...

    Divide by zero is severity level 10: Status Information (!), but it crashes.

    If you want sociable code, you should be checking (via SELECT) that the intended INSERT/UPDATE/DELETE is in fact valid, before attempting it, and thus avoid failing on the attempt (and have to rely on checking afterward).

    Thanks for the note. I'm interested - could you expand on it? I don't think I can easily avoid this. The cost of checking beforehand is hard to compare with the cost of checking afterward. (Inserting a few values doesn't take many resources, but inserting a huge number of rows in a transaction... I can't easily imagine how to test that beforehand...???)
    Last edited by Musil David; 10-04-06 at 12:12.

  9. #9
    Join Date
    Sep 2006
    Location
    Czech Republic
    Posts
    82
    Code:
    SELECT @proc_name = "CallingProc"
    
    BEGIN TRAN @proc_name
    ...
    EXEC @return_stat = CalledProc @var1, @var2
    IF ( @return_stat != 0 )
      BEGIN
        @msg = "CalledProc failed"
        GOTO EXIT_ERR
      END
    
    ...
    
    EXIT_ERR:
    IF (@err != 0)
      BEGIN
        PRINT "Proc %1! failed with error %2!", @proc_name, @errPRINT "Proc %1! failed at %2!", @proc_name, @msg
        ROLLBACK TRAN @proc_name
        RETURN (1)  
      END
    But you never get the status of this nasty proc... the whole thing crashes (together with your calling proc!)... Execution never reaches IF ( @return_stat != 0 )

    Note: and you forgot the test before the call:

    Code:
    select @tran_flag = @@trancount
    
    SELECT @proc_name = "CallingProc"      -- I think naming procedures in variables makes the code harder to read...
    
    if ( @tran_flag = 0 )
        begin transaction @proc_name
    else
        save transaction @proc_name   -- SAVEPOINT
    ...


    ...otherwise you roll back everything... or do I misunderstand why it is in a transaction?
    Last edited by Musil David; 10-04-06 at 12:54.

  10. #10
    Join Date
    Sep 2002
    Location
    Sydney, Australia
    Posts
    255
    David

    This thread is deteriorating ... I have already stated that a public forum is not appropriate for education, which requires interactive conversation, a whiteboard, etc. Please accept my apologies, but I will not be posting further.

    However, for you to gain the intended benefit of what has already been posted, let me provide some pointers to the above posts.
    1 I have stated several times that RAISERROR is correct, PRINT is the minimum. That means you should not use a PRINT statement, you should use a RAISERROR statement. But that needs a bit of set up. Therefore for example code and for throw-away code, PRINT is often used.
    2 You do not need to check a SELECT that loads variables without referencing an object. You do need to check every single DML verb (INSERT/UPDATE/DELETE, and SELECT that returns rows). Yes, your code will be long; that's why we have [implement without thinking] templates. Other than famous liars like Gates, no one said programming is easy.
    3 You appear to have made tests that cause Sybase to terminate execution of your code (missing object, conversion error). Excellent. I have stated that when that happens, the remainder of your code will not be executed. There are MANY MORE TYPES OF ERRORS, including a SELECT that you hope returns rows but in fact does not. I have stated that these are what @@error and @@rowcount are for, and that they are NOT for the types of errors that you have tested for. You should now identify THOSE types of errors and test for them.
    4 You appear to think you can execute the code segments in my posts. The code segments are not executable as is, nor are they intended for execution. Each code segment is an example, only, for the purpose of illustrating the point in question, only. If you add up all the code segments together, you will get a slightly better skeleton, but it is still not executable.
    5 My advice is, Do NOT use savepoints, they further confuse an area (transactions) that is commonly already confused and badly implemented.
    6 You do not understand transactions enough for me to answer your questions (about my EXAMPLE code that prevents common transaction confusion). Sorry, I cannot help you.
    7 Re "verify the attempt before INSERT/UPDATE/DELETE". These are standards that were identified in the 1960s by minds greater than mine. I have simply rendered them in a modern RDBMS context. I am not willing to discuss them, I merely hold them in esteem and use them.
    8 Re "I think naming procedures in variables worse reading of code". If you use a chunk of text (string of fixed chars) more than once, you should put it in a variable, in order to (a) save space and (b) eliminate the consequences of typographical errors. Plus (c) you need it in a variable in order to use it in a PRINT or RAISERROR statement.

    My replies to posts attempt to answer questions of enquirers. They regard Quality, come from standards and templates that we have devised over many years, based on experience, but of course they identify only tiny segments of those standards. I hope you can appreciate that it is unproductive for me to justify these standards to, or to provide education to, someone who needs a basic SQL course, with no experience of either.

    HTH
    Derek Asirvadem
    Senior Sybase DBA/Information Architect derekATsoftwaregemsDOTcomDOTau
    Anything worth doing is worth doing Right The First Time
    Spend your money on standards-compliant development or spend 10 times more fixing it

  11. #11
    Join Date
    Aug 2002
    Location
    Madrid, Spain
    Posts
    97
    Quote Originally Posted by Musil David
    f.e. the example with convert to datetime (int to datetime, varchar to datetime)... Both severity level 16... But in first case I can catch @@error but in second crash...

    Divide by zero is severity level 10: Status Information (!), but i crash
    Please excuse me if I'm wrong, since I'm not checking it against the manuals (which I have read, though some time ago). I think the severity doesn't control this, and your examples prove it. I have more examples of high severity levels not ending a batch; I'm looking right now at a 3474 msg (severity level 18) which didn't end the isql batch. It's rather the stage at which the error is raised. ASE goes through several stages when parsing and running a SQL batch. Conversion errors are evaluated at a very early stage, more or less at the same level as plain SQL syntax errors. But ASE works at a higher stage when it executes the SQL logic inside the stored proc or inside the isql batch. Then the stored proc and the batch are both terminated without a chance to run any error handling code inside them.

    A different point is a really serious error, at severity level 19 and higher. It's documented that these terminate the client connection to ASE.

    In practical terms, there are two solutions:
    1 - Ensure that this kind of error can never happen. In particular, ensure that any data are properly formatted before applications pass them to SQL code.
    2 - Do a final check for errors in non-TSQL code; ie, C code, ksh scripting code, etc. In our center, C programs have always handled errors better than scripts, since our C programs send only one SQL stmt at a time and then handle its sqlcode (the return code in ESQL). The stmt could be a sproc execution.

    A comment about the last. Conversion and syntax errors can be handled by isql, but only in a different batch; example:

    exec failing_sproc
    go
    select @@error
    go

    The @@error variable keeps its value across batches inside the same connection (inside the same isql execution, if using the isql utility). The point of this example is the "go", which ends one batch and starts a new one. @@error must be checked in the first SQL statement of the second batch.

    Regards,
    Mariano

  12. #12
    Join Date
    Aug 2002
    Location
    Madrid, Spain
    Posts
    97
    Quote Originally Posted by corral
    I'm looking now at a 3474 msg (severity level 18) which didn't end the isql batch.
    [snip]
    A different point is a really serious error, at severity level 19 and higher. It's documented that these terminate the client connection to ASE.
    I correct myself. Error 3474 has severity 21, not 18, but isql batch execution continued! This contradicts the documented behaviour.

    For those interested, this is the code, and is executed by isql inside a ksh script. The output (below) shows that the error handling code inside the batch did run (line "ERROR en load tran Jul 31 2006")

    set nocount on
    go
    declare @inic char(26), @finc char(14)
    select @inic = convert(char(26), getdate(), 9)
    load tran N_copia from "${FICHERO}"
    if @@error = 0
    begin
        select @finc = substring(convert(char(26), getdate(), 9), 13, 14)
        print "LOAD TRAN. Inicio: %1!. Fin: %2!", @inic, @finc
    end
    else
        print "ERROR en load tran %1!", @inic
    go

    Msg 3474, Level 21, State 1:
    Server 'MCM0501_SY01', Line 3:
    During redo the page timestamp value is less than old timestamp from log. Page #=1454577, object id
    = 1023095181, page timestamp=0011 3c567dba. Log old timestamp=0011 800000003c567dbc. Log record
    marker = (4202270, 4).
    ERROR en load tran Jul 31 2006 10:34:21:996AM


    The batch also continued with errors 701 (sev level 17), 4207 (16), 4221 (16), 4301 (16), 4305 (16), 4306 (16), 624 (21), 7205 (18) and 4233 (19). Some have severity levels above 18.

    Mariano
