  1. #1
    Join Date
    Dec 2003
    Posts
    44

    Unanswered: Large IO and Sybase

    Just a note to ask for some performance tuning recommendations.

    We have a database application that BCPs in millions of rows and then proceeds to generate cascading inserts, updates, and deletes to other tables in the database.

    All of the transactions are bulk transactions: mass inserts, mass updates, and mass deletes (not as much on the deletes however).
    No OLTP will ever take place on this machine.

    We've already done the following from a code perspective and it's worked quite well:

    0. Drop all indexes on tables prior to BCP.
    1. Ensure all updates are DIRECT not deferred.
    2. Leverage SELECT/INTO wherever possible to avoid logging.
    3. Specify indexes directly if Sybase isn't properly choosing them.
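    Roughly, steps 0, 2, and 3 look like this (server, login, table, and index names below are placeholders, not our real ones):

    ```shell
    # Sketch only -- MYSERVER, big_table, ix_big_key are placeholder names
    isql -Usa -SMYSERVER <<'EOF'
    -- 0. Drop indexes before the BCP load; recreate them afterwards
    DROP INDEX big_table.ix_big_key
    go
    -- 2. SELECT/INTO is minimally logged ("select into/bulkcopy" must be
    --    enabled on the database)
    SELECT col1, col2
    INTO   work_table
    FROM   big_table
    go
    -- 3. Force a specific index when the optimizer chooses poorly
    SELECT col1 FROM big_table (INDEX ix_big_key) WHERE col1 = 42
    go
    EOF
    ```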

    From a Sybase installation perspective, I'm asking how best to tune our server, given that it is strictly a large-IO processing server. Our biggest incoming BCPs are 7 million rows.

    We already have the data and log on separate devices, plenty of transaction log space, and a very beefy tempdb.

    The application isn't performing "badly". We just need to ensure from a service level perspective, that it is running as optimally as possible.

    Cheers,

    Isaac

  2. #2
    Join Date
    Jun 2003
    Posts
    140

    Re: Large IO and Sybase

    Hi Isaac

    As far as BCP performance is concerned, check these out:

    (1) You can improve bcp performance dramatically by executing several bcp sessions against a partitioned table.

    (2) Network traffic can quickly become a bottleneck when multiple bcp sessions are being executed. If possible, use a local connection to the Adaptive Server to avoid this bottleneck.

    (3) Make sure Adaptive Server is configured with enough locks to support multiple bcp sessions.

    (4) Use fast bcp as much as possible.

    (5) Make sure the "number of pre-allocated extents" parameter is configured optimally (check the manual for the proper value).
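    A rough sketch of points (1) and (3), assuming a table already partitioned four ways; server name, login, lock count, and file names are placeholders:

    ```shell
    # (3) Raise the lock count to cover multiple concurrent bcp sessions
    isql -Usa -SMYSERVER <<'EOF'
    sp_configure "number of locks", 50000
    go
    EOF

    # (1) Run one fast (no-index) bcp session per partition, in parallel;
    #     the :N suffix targets a specific partition of the table
    for p in 1 2 3 4; do
      bcp mydb..big_table:$p in data_part$p.dat -Usa -SMYSERVER -c &
    done
    wait
    ```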


    hope this helps
    Cheers and merry Xmas :-)

  3. #3
    Join Date
    Oct 2003
    Posts
    41
    Also make sure you have increased the network packet size to 1024
    ====================
    Rajesh
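    For example (the server-side maximum must be at least as large as what clients request; the values and names here are illustrative):

    ```shell
    # Raise the server's packet-size limits (requires a restart to take effect)
    isql -Usa -SMYSERVER <<'EOF'
    sp_configure "max network packet size", 1024
    go
    sp_configure "default network packet size", 1024
    go
    EOF

    # ...then request the larger packet size on the bcp command line with -A
    bcp mydb..big_table in data.dat -Usa -SMYSERVER -c -A 1024
    ```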

  4. #4
    Join Date
    Nov 2002
    Location
    Switzerland
    Posts
    524
    ... and configure a buffer pool of 16K

  5. #5
    Join Date
    Dec 2003
    Posts
    44
    Fadace,

    Can you tell me how to configure the 16K pools?

    Please be as precise syntactically as you can.

    Much appreciated,

    Isaac

  6. #6
    Join Date
    Nov 2002
    Location
    Switzerland
    Posts
    524
    http://sybooks.sybase.com/onlinebook...2794;lang=fr#X

    Code:
    -- Create a 10Mb buffer pool of 16K in the default cache 
    sp_poolconfig "default",  "10M", "16K"

  7. #7
    Join Date
    Sep 2003
    Location
    Switzerland
    Posts
    443
    - Striped disks could also improve performance.
    - Also, when doing bcp, specify a different partition number in each thread so that you can be absolutely sure the rows are hitting different partitions.
    - Don't forget to increase "additional network memory" if you are increasing "network packet size".
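    For instance (the memory figure below is just an example, not a recommendation; size it to your packet size and session count):

    ```shell
    # Give the server extra buffer space to back the larger network packets
    isql -Usa -SMYSERVER <<'EOF'
    sp_configure "additional network memory", 512000
    go
    EOF

    # Each bcp thread pinned to its own partition, using the bigger packets
    bcp mydb..big_table:1 in part1.dat -Usa -SMYSERVER -c -A 1024
    ```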

    Vishi

  8. #8
    Join Date
    Sep 2003
    Location
    Switzerland
    Posts
    443
    Forgot to mention: if you are using 12.x, you can also look into using file systems with DSYNC instead of raw partitions. The file system cache can boost performance while the DSYNC option still assures that committed data reaches disk.
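    The dsync setting is per-device; a hypothetical example (device name, path, and size are placeholders, and exact DISK INIT parameters vary by 12.x release):

    ```shell
    isql -Usa -SMYSERVER <<'EOF'
    -- Create a file-system device with dsync on (guaranteed writes)
    DISK INIT
        name     = "data_dev1",
        physname = "/sybase/data/data_dev1.dat",
        size     = "2G",
        dsync    = true
    go
    -- Or toggle it on an existing file-system device
    sp_deviceattr "data_dev1", "dsync", "true"
    go
    EOF
    ```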

  9. #9
    Join Date
    Nov 2002
    Location
    Switzerland
    Posts
    524
    Originally posted by trvishi
    Forgot to mention, if you are using 12.x, then you can also look into using file systems with DSYNC instead of raw partitions. The file system cache can boost performance at the same time assure data commits with the DSYNC option.
    It can boost performance, or not! Prefer a raw device over a DSYNC file system if, for example, your OS doesn't allow asynchronous I/O on file systems.

  10. #10
    Join Date
    Dec 2003
    Posts
    44

    Re: Large IO and Sybase

    Just a note to say thanks. These have all been very helpful areas for us to look into.

    Cheers,

    Isaac


