  1. #1
    Join Date
    Nov 2003
    Posts
    6

    Unanswered: Oracle Query Performance, Indexes, and Optimization

    I've got an issue that I'm sure the posters here can answer regarding Oracle databases.

    I'm a SQL Server guru, which is all we've got in-house. Works great for our small and mid-sized apps.

    For our accounting system, we've outsourced in an ASP model. The provider is using Oracle 9 (I believe) for the back end. We're seeing a certain job (which consists of numerous SQL queries) that takes 45 minutes to run. IMO, it doesn't seem like the type of process that should take as long as it does. However, the ASP says they do not have any tools to diagnose which of the queries are performing poorly, and can't figure out whether the application would be helped by additional indexes, etc. I find that very hard to believe. In SQL Server, there are numerous tools to perform those tasks, most of which are extremely simple to use (e.g., Profiler, Idera's tools, Quest's tools).

    Do these types of tools exist for Oracle? Do they come with base Oracle database administration tools? Query analysis, index analysis, poorly performing queries, table scans, etc.

    I need some information to be able to push back on their DBAs, whose sincerity as well as ability I'm doubting. I can't believe there aren't any tools out there that can help an Oracle DBA with performance analysis.

    Thanks in advance for your help.

  2. #2
    Join Date
    Dec 2003
    Posts
    1,074
    Since you're using ASP as the interface to the data, you might have a bit of trouble as a result of connection pooling. If you can determine where a connection starts, & where one ends, though, then you can enable/disable a trace of the session with extreme precision.

    Once you have this tracefile, you can search it for problematic SQL. Oracle offers tkprof as an 'interpreter' of this tracefile, although there are some hardcore performance experts who can read it directly for info.
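    As a rough sketch (the trace file name and the sort options are illustrative, not from this thread), tracing your own session and feeding the result to tkprof might look like this:

```sql
-- Turn on SQL trace for the current session (9i-era syntax);
-- timed_statistics makes the trace include elapsed/CPU timings.
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... run the slow batch here ...

ALTER SESSION SET sql_trace = FALSE;

-- Then, at the OS prompt, format the trace file written to user_dump_dest,
-- sorted so the most expensive statements come first:
--   tkprof mydb_ora_12345.trc report.txt sort=prsela,exeela,fchela
```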

    We've purchased the Hotsos Profiler, which blows tkprof out of the water when you're trying to find the time-consuming portion of a complex, multi-SQL batch process.

    Oracle 10g's Enterprise Manager (OEM) does have some tuning features which give you generic advice on when to use indexes and such, but you always have to temper that advice with experience and know-how. So I don't think you're missing out on that piece in 9i.

    BTW, people build their careers around finding trouble spots in batch jobs run against Oracle. While it's not always easy, your vendor is not being straightforward about what's possible in terms of performance tuning.

    -Chuck

  3. #3
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    I am not sure if there is a tool to catch real-time query data in Oracle. You can set an init parameter (sql_trace) to generate trace files, but that requires a restart of the database, which is not always possible. As an alternative, there is the Top SQL tool in Oracle Enterprise Manager, which will show you the top queries by I/O, executions, or I/O per execution. Enterprise Manager will also show you the long operations currently running in the sessions view (very similar to SQL Enterprise Manager's Current Activity). Like SQL Server, you can drill into the current SQL being run, or the last SQL that was run in case it is an idle connection.
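    If all you have is SQL*Plus rather than Enterprise Manager, roughly the same information can be pulled straight from the V$ views; the column names below follow the 9i-era V$SQL and V$SESSION_LONGOPS definitions:

```sql
-- Top cached statements by total disk reads, with reads per execution:
SELECT sql_text, executions, disk_reads, buffer_gets,
       ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
  FROM v$sql
 ORDER BY disk_reads DESC;

-- Long-running operations (full scans, big sorts, ...) still in flight:
SELECT sid, opname, target, sofar, totalwork, time_remaining
  FROM v$session_longops
 WHERE sofar < totalwork;
```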

  4. #4
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    >You can set an init parameter (sql_trace) to generate trace files, but that requires a restart of the database
    You can write a LOGON trigger to enable SQL_TRACE without having to bounce the DB & with some added logic it can be (very) selective WRT which sessions are traced.
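    A minimal sketch of such a trigger, assuming a hypothetical ACCT_BATCH schema as the account you want traced:

```sql
-- LOGON trigger: enable SQL_TRACE only for the sessions we care about,
-- with no restart of the database required.
CREATE OR REPLACE TRIGGER trace_acct_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'ACCT_BATCH' THEN  -- hypothetical application schema
    EXECUTE IMMEDIATE 'ALTER SESSION SET timed_statistics = TRUE';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sql_trace = TRUE';
  END IF;
END;
/
```

    Remember to drop the trigger (or tighten the IF) when you're done, since tracing isn't free.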


    HTH & YMMV!
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  5. #5
    Join Date
    Nov 2006
    Location
    Indianapolis
    Posts
    90
    There are multiple ways to set trace in a session, depending on database version. Up through 9i, they weren't spelled out clearly by Oracle.
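    For reference, a couple of the session-tracing entry points that existed by 9i (DBMS_SYSTEM being one of the under-documented ones; the SID/SERIAL# values here are placeholders):

```sql
-- From inside your own session:
EXECUTE DBMS_SESSION.SET_SQL_TRACE(TRUE);

-- For someone else's session, given its SID and SERIAL# from v$session:
EXECUTE DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 456, TRUE);
```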

    There are lots of options for looking at sql performance - in Oracle though it takes some DBA skill to diagnose.

    Your ASP just doesn't want to commit resources to it...

    Chuck - Hotsos Rocks! I just can't afford them...

  6. #6
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Quote Originally Posted by anacedent
    You can write a LOGON trigger to enable SQL_TRACE without having to bounce the DB
    Just read up on the SQL_TRACE parameter, after seeing your comment. Dang that fine print. Now I gotta go experimenting. Thanks, anacedent.

  7. #7
    Join Date
    Nov 2003
    Posts
    6
    Thanks for the insight.

    In this case, it is a job that occurs during off hours, so doing the trace in a dev environment is an option, and we wouldn't have to worry about determining which queries are ours.

    We had a conference call today with the ASP's DBAs, and they did nothing to resolve the situation. It's been escalated to the VP level in their organization now.

    Again, thanks for the information.

  8. #8
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    Re: doing the tuning in your dev environment - this may or may not work, as the Oracle optimizer takes into account your CPU speed, I/O speed, data volumes, and distribution. If any of these differ between your dev and prod environments you can get radically different results.

    If I were tuning your system I would do the following:
    1) Get a Tom Kyte book; they provide a good intro to tuning Oracle.
    2) Check your schema is analyzed. If your tables/indexes aren't analyzed, Oracle won't have a clue as to the best way to run the queries.
    3) Use statspack (or look at v$sql directly) to get an idea of which queries are causing the problems. Or you could go Anacedent's route of using sql trace, though you have to be a bit careful as it can have a noticeable impact on performance in some cases.
    4) Once you have found the problem queries, try tuning them. The most common problem with untuned databases is full table scans; adding indexes usually helps in these cases. The key thing with any Oracle tuning is that there are NO golden rules. Try out different options (different indexes, rewriting the SQL, replacing cursors with a single SQL statement, etc.) and see which works best.
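    Steps 2 and 3 above can be sketched in SQL*Plus; the schema name is a placeholder:

```sql
-- Step 2: gather optimizer statistics for the whole schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ACCT', cascade => TRUE);
END;
/

-- Step 3: find the expensive statements straight from the library cache
SELECT sql_text, buffer_gets, disk_reads, executions
  FROM v$sql
 ORDER BY buffer_gets DESC;
```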

    And finally, just because you have tuned your database it won't necessarily stay that way, so be prepared to monitor it.

    Alan

  9. #9
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    AlanP,
    No offense is intended & I hope none taken.
    I agree with some of your statements & disagree with others.

    >If any of these are different between your dev and prod environments you can get radically different results.
    I contend that if both DBs have current statistics, the same SQL will be the longest-running in both places.

    >1) Get a Tom Kyte book, they provide a good intro into tuning Oracle.
    Nice long term payback, but little in the short term.
    >2) Check your schema is analyzed.
    MOST important in my book.
    >3) Use statspack (or look at v$sql directly)
    Waste of time, IMO.
    >4) Once you have found the problem queries try tuning them.
    Start with longest elapsed time & work down the elapsed time list.

  10. #10
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    Hi Anacedent

    No offense taken.

    There are two schools of thought: some DBAs go for the sql trace (and wait analysis) way, while others go for the v$sql based route (statspack). I am more in the second camp, mainly because of the environment I currently work in, where the prod databases are under heavy load and we can't put tracing on all sessions without bringing our database to a grinding halt (also we have connection pooling, so putting a trace on one session doesn't really give you an accurate picture).

    v$sql (and those tools built on it, like statspack) gives you a very quick and easy way of seeing which queries are putting on heavy load in terms of block gets/physical reads. And you can do it after a problem is highlighted. I am not saying it's perfect (it won't capture all SQL; if your sqlarea is small, SQL will age out regularly; etc.) but it is quick, easy, and unobtrusive. And when used with v$sql_plan (except on 9i, where it can crash your database) it gives you a lot of the info you get from trace files.
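    For what it's worth, the v$sql-to-v$sql_plan join looks roughly like this (the LIKE filter is a made-up example; the join keys follow the 9i view definitions):

```sql
-- Pull the cached execution plan for one heavy statement.
SELECT p.id, p.parent_id, p.operation, p.options, p.object_name
  FROM v$sql s, v$sql_plan p
 WHERE p.address      = s.address
   AND p.hash_value   = s.hash_value
   AND p.child_number = s.child_number
   AND s.sql_text LIKE 'SELECT /* BATCH_STEP_1 */%'  -- hypothetical tag
 ORDER BY p.id;
```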

    I do use tracing as well, but it is usually to identify CBO problems with a particularly hard-to-tune query, i.e. why it chose one execution plan instead of another.

    Alan

  11. #11
    Join Date
    Nov 2006
    Location
    Indianapolis
    Posts
    90
    Looking at V$ is like going to the doctor and getting a physical; important - yes. Identify some trends and major problems, establish some history - all good.

    But when you need something specific like knee surgery, nothing beats properly scoped trace files...

  12. #12
    Join Date
    Dec 2003
    Location
    Varna
    Posts
    20
    Hi there!
    I just happen to be in a query optimisation phase, so I'll add my fresh observations.

    The approach discussed here (going SQL by SQL and fixing them) works well when basic principles have been overlooked. Then one reads some documentation ("Get a Tom Kyte book" ...) and makes fixes.

    But in an operational system, the first thing done on every complaint about speed is: fix the SQLs. If performance is still unsatisfactory after that, one has to switch from SQL analysis to PROCESS analysis.

    So it may be more beneficial to check for redundant data flows or redundant calculations!

    All The Best
    Prob Solver

  13. #13
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    Well, if you are doing proper performance analysis, you look at the network, OS, application, and the database. Unless you can cover all of these, you are only fixing part of the problem. This is why it is very important to have sys admins, network engineers, DBAs, and developers working in close proximity to each other. Unfortunately this rarely happens in most companies.

    Alan

  14. #14
    Join Date
    Nov 2006
    Location
    Indianapolis
    Posts
    90
    Quote Originally Posted by AlanP
    ...working in close proximity to each other. Unfortunately this rarely happens in most companies.
    I couldn't agree more. Too bad the poster's ASP is taking that to an extreme...

  15. #15
    Join Date
    Oct 2008
    Posts
    4

    Embarcadero DBOptimizer

    There is a performance monitoring tool from Embarcadero called DBOptimizer.

    From their site: Embarcadero DB Optimizer maximizes database and application performance by enabling DBAs and developers to quickly discover, diagnose, and optimize poor-performing SQL. DB Optimizer eliminates performance bottlenecks by identifying data intensive or frequently executed queries, focusing on specific SQL statements through query statistics (CPU, I/O, wait times), and fine-tuning problematic statements.

    This is where you can find more: Database Software for SQL Optimization | DB Optimizer

    Catalin
