  1. #1
    Join Date
    Apr 2003
    Posts
    64

Unanswered: When using compress:: to dump via a remote server...

    ... who is doing the compressing? The remote, or the local server?

    We have some old and slow servers running Sparc -- asking them to compress their dumps means waiting forever.

    If we ask them to dump via a remote (fast Opteron-based) server, will they still be doing the compressing, or will they send raw data to the remote and have it do the compression?

    Thanks!

  2. #2
    Join Date
    Aug 2004
    Posts
    38
You need to be running v12.5.2+ to be able to make compressed dumps remotely. Yes, the work is done on the remote host, meaning it will take the load off of your prod server.

    HTH

  3. #3
    Join Date
    Apr 2003
    Posts
    64
    I see. Thanks a lot.

    If the remote and the local servers are of different endianness (Sparc vs. Opteron, for example) -- what endianness will go into the dump? That of the compressing, or that of the original server?

Also, when you say "You need to be running v12.5.2+", is that the requirement for the main (source) server, or for the compressing one? We have a FreeBSD machine, for example, on which we can install the old Sybase 11 -- will it be able to compress dumps received from the 12.5.3 servers?

  4. #4
    Join Date
    Aug 2004
    Posts
    38
It's my understanding that the format of the dump will be that of the source server.

Both servers have to be v12.5.2+ for remote compressed dumps, as Sybase changed the way compression works in that release. From v12.5.2 onwards the compression was moved into the sybmultbuf processes for the first time. Compression on v11.x servers isn't supported by Sybase, although it will (probably) work if you copy the libcompress.so library from a later server into the library path.
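If you want to try that, it would be something along these lines (the paths here are only placeholders; the exact layout depends on your platform and release):
Code:
$ # on the v11.x box: pull libcompress.so from a 12.5.x installation
$ rcp newsrv:/opt/sybase-12.5/ASE-12_5/lib/libcompress.so /opt/sybase-11/lib/
$ # make sure the Backup Server's library path picks it up before (re)starting it
$ LD_LIBRARY_PATH=/opt/sybase-11/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH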

  5. #5
    Join Date
    May 2005
    Location
    South Africa
    Posts
    1,365
    Provided Answers: 1
You can also consider using a fifo special file with rsh to the remote server.
Note: the cat command is submitted in the background. The gzip -dc used to uncompress will not do much work, as the file was compressed with compression level 0; it is then recompressed at maximum compression.

    Srv1:
    Code:
    $ mkfifo /tmp/sybsystemprocs.fifo
$ cat /tmp/sybsystemprocs.fifo | rsh srv2 "(gzip -dc | gzip -9c >/tmp/sybsystemprocs.dgz)" &
    [1]     20441
    $ isql 
    1> dump database sybsystemprocs to "compress::0::/tmp/sybsystemprocs.fifo"
    2> go
    srv1 CPU Stats
    Code:
    ps -e -o pid,pcpu,pmem,stime,rss="     Ram" -o vsz="    Swap" -o nlwp,args | sort -k 2,2nr |head
    
      PID %CPU %MEM    STIME      Ram     Swap NLWP COMMAND
    20509  0.5  0.0 08:32:25     2264     2536    1 /apps/syb125/ASE-12_5/bin/sybmultbuf 21 9 12 /apps/syb125/ASE-12_5/install/srv1
    20443  0.2  0.0 08:31:29      592     1768    1 rsh srv2 (gzip -dc|gzip -9c >/tmp/sybsystemprocs.dgz)
    20510  0.1  0.1 08:32:25     3024     3584   11 /apps/syb125/ASE-12_5/bin/sybmultbuf 21 9 12 /apps/syb125/ASE-12_5/install/srv1
    20442  0.1  0.0 08:31:29      672      984    1 cat /tmp/sybsystemprocs.fifo
    Srv2 CPU Stats
Note: srv2 is an 8-CPU machine; 100/8 = 12.5, i.e. 12.5% CPU is one CPU at 100%.
You might want to stripe your dump across the same number of files as there are CPUs available for compression (see the striped example at the end of this post).
    Code:
    $ ps -e -o pid,pcpu,pmem,stime,rss="     Ram" -o vsz="    Swap" -o nlwp,args | sort -k 2,2nr |head
    
      PID %CPU %MEM    STIME      Ram     Swap NLWP COMMAND
    24437 12.5  0.0 08:31:28      960     1344    1 gzip -9c
    24438  0.3  0.0 08:31:28      760     1344    1 gzip -dc
And you don't need to work through a fifo file for the load, unless you used something other than gzip as your compressor.
    srv2:
    Code:
    1> load database ppd1 from "compress::/tmp/sybsystemprocs.dgz"
    2> go
    Backup Server: 3.42.1.1: LOAD is complete (database ppd1).
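For a striped dump through the fifos it would look something like this (database and file names are just examples; one cat/rsh/gzip pipeline per stripe, so each stripe gets its own CPU on srv2):
Code:
$ for i in 1 2 3 4
> do
>   mkfifo /tmp/mydb.$i.fifo
>   cat /tmp/mydb.$i.fifo | rsh srv2 "(gzip -dc | gzip -9c >/tmp/mydb.$i.dgz)" &
> done
$ isql
1> dump database mydb to "compress::0::/tmp/mydb.1.fifo"
2> stripe on "compress::0::/tmp/mydb.2.fifo"
3> stripe on "compress::0::/tmp/mydb.3.fifo"
4> stripe on "compress::0::/tmp/mydb.4.fifo"
5> go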

  6. #6
    Join Date
    Apr 2003
    Posts
    64
Interesting... Last time I tried this, Sybase complained that the fifo was an "unsupported file type". Maybe this has changed with newer Sybase versions... Or maybe the "compress::" prefix makes it less picky...

There is also another trick to it -- the dump device is supposed to be rewindable, because of the way the Sybase geniuses wrote things: Sybase rewinds to the beginning of the file afterwards and modifies a few bytes...

I guess dumping with compress:: (even at level 0) avoids that. Thanks for the idea, I'll try it.

    BTW, why would I need the cat? Why make the source machine go through one extra cycle of memory copying (from cat to rsh)? rsh can do the reading with simple input redirection, can't it?

    Code:
    rsh srv2 "(gzip -dc | gzip -9c >/tmp/sybsystemprocs.dgz)" < /tmp/dump.fifo &

  7. #7
    Join Date
    May 2005
    Location
    South Africa
    Posts
    1,365
    Provided Answers: 1
Yes, it is only with the compress:: prefix that you can use the fifo special file.
And input redirection instead of cat is more efficient, thanks for pointing this out.
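So on srv1 the whole sequence becomes something like:
Code:
$ mkfifo /tmp/sybsystemprocs.fifo
$ rsh srv2 "(gzip -dc | gzip -9c >/tmp/sybsystemprocs.dgz)" < /tmp/sybsystemprocs.fifo &
$ isql
1> dump database sybsystemprocs to "compress::0::/tmp/sybsystemprocs.fifo"
2> go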

  8. #8
    Join Date
    Jan 2003
    Location
    Geneva, Switzerland
    Posts
    353
    I would add that to do remote compression you have to use the new "with compression" syntax rather than the embedded compress:: directive:
    Code:
    dump database foo to "/the/sybase/dump/file.dump" at REMOTE_BS
    stripe on "/other/sybase/file.dump" at OTHER_BS
    with compression = 1
    Michael

  9. #9
    Join Date
    Apr 2003
    Posts
    64

Seeking beta-testers

    Ok, I figured out Sybase's (unpublished) API for backup plugins. I can now do the following:

    Code:
    dump database WHATEVER to 'pipe::ssh me@example gzip -9 \> /meow/tmp/WHATEVER.cmp';
As you can guess, it could be ANY command (interpreted by sh); the data will simply be pumped into its standard input. Instead of (or in addition to) compressing, for example, one can encrypt the dump before it ever hits the disk. Whatever...
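For instance, an encrypted dump could look something like this (the key file and paths are, of course, made up):

Code:
dump database WHATEVER to 'pipe::ssh me@example openssl enc -aes-256-cbc -salt -pass file:/meow/keys/dump.key -out /meow/tmp/WHATEVER.enc';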

Obviously, a gzip-compressed dump like the one in the first example can be loaded back using the traditional 'compress' plugin (so you can send it to others), or with

    Code:
    load database WHATEVER from 'pipe::ssh me@example gzcat /meow/tmp/WHATEVER.cmp';
    If you chose other processing methods (like bzip2), you will, of course, have to use my plugin with the UNprocessing command line to load the database...
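For example, for a bzip2'ed dump the pair would be something like (file names made up):

Code:
dump database WHATEVER to 'pipe::ssh me@example bzip2 -9 \> /meow/tmp/WHATEVER.bz2';
load database WHATEVER from 'pipe::ssh me@example bzcat /meow/tmp/WHATEVER.bz2';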

    I'm very proud of myself, and plan to make loads of money with this library :-) But I need beta-testers, who will all get to keep (but not redistribute, please) the final version of the plugin.

    Anyone interested in the beta of libpipe.so for Linux or Solaris?
