  1. #1
    Join Date
    Jun 2004
    Posts
    115

Unanswered: Concept of segment and DBCC in Sybase

    Hi

I am new to Sybase and I am having a tough time understanding what a segment is and why it is required.

Also, why do we need to run these DBCC commands periodically on a database? I have never heard of this kind of thing being done on other databases.

And what kind of performance impact is there from running DBCC commands on a live production database?
    Last edited by db2hrishy; 09-14-09 at 01:23.

  2. #2
    Join Date
    Mar 2009
    Location
    Sydney, Australia
    Posts
    258
Basically, Sybase is an "enterprise"-level database, which allows thousands of active users to be connected to a single Sybase Server (hosted on a mid-range "enterprise" operating system). It has many features that other DBMSs do not have. Like any other DBMS, you only get the performance out of the performance features if you understand them and use them; that means there is an implementation and ongoing admin labour requirement.

    1 Segment
First we have to assume that you have the Devices (which are SAN LUNs or /ufs files) set up reasonably well. Then (for high-performance systems), you would want to spread the I/O load across these Devices; otherwise you have all tables interspersed with each other (heavily fragmented) on one dirty great big "default" segment, with one insert "hotspot". Segments allow you to identify specific Devices, or a group of Devices, and to place tables and partitions of tables on specific Devices. A segment is a logical view of the physical Devices. Segments allow you to spread the I/O load and substantially reduce fragmentation. E.g.:
place tables which are addressed in the same transaction onto separate Devices
place the data/clustered index on one Device, and the nonclustered indices on other Devices
place each partition (of partitioned tables) on separate Devices.
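
The above can be sketched with the standard segment procedures. A minimal sketch; the device, database, table and segment names here are made-up examples, and the procedures are run from within the target database:

```sql
-- Create a named segment on an existing database Device (all names are examples)
exec sp_addsegment order_seg, salesdb, data_dev1
-- Extend the segment over a second Device, to spread the I/O load
exec sp_extendsegment order_seg, salesdb, data_dev2
go

-- Place the table (data pages and clustered index) on that segment
create table orders (
    order_no int      not null,
    cust_no  int      not null,
    placed   datetime not null
) on order_seg
go

-- Place a nonclustered index on a different segment (hence different Devices)
create nonclustered index ix_orders_cust
    on orders (cust_no)
    on ncindex_seg
go
```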

    2 DBCC
Other database systems may not provide the same level of recoverability from system or disk failures. In a high-throughput database, it is not uncommon for a disk write to fail; what would you like to do when that happens? We do not really want to bring the db down and restore from a backup, do we? Sybase has the capability to FIX such pages, which should have been written but were not. Second, in order to find and identify such pages, DBCC has to be run at regular intervals; for some shops that means daily; for others it means a rolling group of tables daily, where all tables get checked, say, twice a week.
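
For reference, the basic checks are plain commands; a minimal sketch (the database and table names are examples):

```sql
-- Check every table and index in the database for page-linkage errors
dbcc checkdb(salesdb)
go
-- Check the allocation structures (extents, OAM pages)
dbcc checkalloc(salesdb)
go
-- Check a single table; this is what a rolling nightly schedule runs
dbcc checktable(orders)
go
```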

    3 DBCC Performance
Read up on the subject in the manuals. Over the years, in every version, DBCC has been enhanced more and more. It is also highly configurable; the window can be reduced with thoughtful setup; you can execute it in parallel when the system is least used (e.g. 02:00 in the morning). If you do not configure anything, then there will be a small additional load on the system, as it has to read all the pages in the db into memory and check them. With a good configuration, one that is tuned for the site/usage, the overhead is much smaller. You can also exec DBCC on the Test or Replicate system; identify any fixes required; and run just the FIX on the Production system.
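
The usual configured, parallel form is dbcc checkstorage; a sketch, assuming the dbccdb work database has already been set up per the manuals (the database name is an example):

```sql
-- Parallel, lower-impact check; faults are recorded in the dbccdb work database
dbcc checkstorage(salesdb)
go
-- Report the faults found, so only the necessary fixes are run on Production
exec sp_dbcc_faultreport 'short', salesdb
go
```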

    There are two more nightly/weekly maintenance requirements:

    4 Update Stats
Pretty much the same as DBCC, but this updates the statistics of the columns in the db, which the Optimiser uses to decide what kind of plan it will use for a query. If you want your queries to run at the fastest speeds, you need to update stats regularly. You can run this on another server, and port the updated stats back into the production server.
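
A sketch of the commands involved (the table name is an example):

```sql
-- Refresh statistics on the leading column of each index on the table
update statistics orders
go
-- More thorough: statistics on all columns of all indexes (slower to run)
update index statistics orders
go
-- Flag stored-procedure plans for recompilation so they pick up the new stats
exec sp_recompile orders
go
```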

    5 Reorg
APL tables do not need this; they are self-trimming and low-maintenance. DPL/DRL tables, while they may be a bit faster for inserts/deletes, create a lot of wasted space and forwarded rows (much, much more fragmentation than APL). This slows down all queries on those tables, and thus Reorg has to be executed frequently; weekly is the minimum for high insert/delete environments. You cannot run this offline; it must be run on the production db; and yes, it acquires locks for the duration, and therefore interferes with production use.
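
The reorg command comes in several passes, from cheapest to heaviest; a sketch (the table name is an example):

```sql
-- Cheapest pass: rehome forwarded rows only (DPL/DRL tables)
reorg forwarded_rows orders
go
-- Reclaim the space left behind by logically deleted rows
reorg reclaim_space orders
go
-- Heaviest pass: rebuild the table and all its indexes (longest locks)
reorg rebuild orders
go
```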

    (For high-performance, high-uptime shops, we stay away from such abortions; we can prevent such mass fragmentation altogether, and the demanded Reorg, simply by avoiding DPL/DRL tables).
    Regards
    Derek Asirvadem (Formerly DerekA)
    Information Architect / Senior Sybase DBA
    Copyright 2009 Software Gems Pty Ltd

    I answer questions from the Original Poster only. If you have a genuine question, as the moderators have requested, start a new thread.

    http://www.softwaregems.com.au

  3. #3
    Join Date
    Jun 2004
    Posts
    115
    Hi

Sorry, I did not understand the meaning of APL, DPL and DRL.

    regards

  4. #4
    Join Date
    Mar 2009
    Location
    Sydney, Australia
    Posts
    258
    In Sybase, there are three Lock Schemes which you can choose for each table:
    All Pages Locked
This is the oldest and the fastest, but it requires the table to be truly Relational. If you are used to MS, all MS tables are APL. Locks are acquired on both Index and Data pages, hence "all pages". If the table is designed badly (not Relational; Identity columns or a bad Clustered Index), there is contention at the index level (and if designed well, there is no contention). APL tables are self-trimming; no Reorg is required.

    Data Page Locked/Data Row Locked
These tables detach the Index from the Data Heap. That one point of contention on the Index pages is removed, so the locks used are not on "all pages"; they are only on "data pages" or "data rows". Deletes are made "faster" by not deleting the rows but merely marking them for deletion. Therefore you need to Reorg the tables (offline). Updates are made "faster" by not shuffling the rows on the page when the updated rows no longer fit on the page; a new overflow page is simply written, leaving a forwarded row behind. That causes a new order of fragmentation. Updates and Deletes on these tables may be faster than APL at the online moment, but they are actually much slower overall, due to the large fragmentation (the whole table suffers), and they require admin work (offline) to de-fragment them.

    APL tables, done with reasonable expertise, remain faster. And they do not require "Reorg" or the like.
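
The Lock Scheme is chosen per table, at creation or later; a sketch (the table and column names are examples):

```sql
-- Choose the scheme at creation time
create table orders (
    order_no int not null,
    cust_no  int not null
) lock allpages
go
-- Convert an existing table; an APL <-> DPL/DRL conversion copies the whole table
alter table orders lock datarows
go
-- Report the current lock scheme, among other details
exec sp_help orders
go
```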

All three Lock Scheme tables need their Indices dropped and recreated occasionally, to reduce fragmentation at the Index level; that is different from Reorg.
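
That is a plain drop and recreate; a sketch (the index and table names are examples):

```sql
-- Drop and recreate a nonclustered index to compact the index pages
drop index orders.ix_orders_cust
go
create nonclustered index ix_orders_cust
    on orders (cust_no)
go
```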
    Last edited by Derek Asirvadem; 09-14-09 at 07:21.
    Regards
    Derek Asirvadem (Formerly DerekA)
    Information Architect / Senior Sybase DBA
    Copyright 2009 Software Gems Pty Ltd

    I answer questions from the Original Poster only. If you have a genuine question, as the moderators have requested, start a new thread.

    http://www.softwaregems.com.au
