  1. #1
    Join Date
    Jul 2010
    Posts
    3

    Need help on how to use a Perl DBI script to poll logs from Oracle databases?

    Hi,

    I have 2 separate servers, each with an Oracle database installed. I am trying to pull Oracle logs from these 2 databases to my main Linux server using Perl DBI. On this Linux server I have 1 dbipoll.pl script and 2 wrapper scripts that pass in parameters to dbipoll.pl.

    Here is the sample wrapper script that I've used to call dbipoll.pl (a separate wrapper script with different parameters is used for each database):

    Code:
    #!/bin/sh
    #
    export ORACLE_HOME=/ora-main/app/oracle/product/11.1.0.6/db
    export ORACLE_SID=rmdev1
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME
    #
    /usr/bin/dbipoll.pl -dbtype=oracle -server=servername1 -dbport=1521 -db=development -user=myuser -password=mypassword -table=dba_audit_trail -columns="OS_PROCESS,OS_USERNAME,USERNAME,USERHOST,ACTION,ACTION_NAME,TRANSACTIONID,SCN,SQL_BIND,SQL_TEXT,TO_CHAR(EXTENDED_TIMESTAMP,'MM/DD/YYYY HH24:MI:SS'),EXTENDED_TIMESTAMP" -countfile="/app/oracle/product/monitor/countfile1" -countkey="EXTENDED_TIMESTAMP" -o="/opt/oracledb1.log"
    As you can see, the script should retrieve logs, write them to oracledb1.log, and update countfile1 with the latest timestamp. The wrapper scripts are scheduled to run at 5-minute intervals.

    The first wrapper script, which I've named call_dbipoll1.sh, runs fine. I am able to retrieve events, and both oracledb1.log and countfile1 are successfully updated.

    However, the second wrapper script, call_dbipoll2.sh, is successful only for the first run. For subsequent runs, I've noticed that countfile2 does not get updated with the latest timestamp (i.e. the file returns to blank) and oracledb2.log returns to blank as well. I am therefore unable to pull new events, and I have been unable to determine what went wrong here.

    Here is a sample of my second wrapper script, which tries to retrieve events from the 2nd database:

    Code:
    #!/bin/sh
    #
    export ORACLE_HOME=/ora-main/app/oracle/product/11.1.0.6/db
    export ORACLE_SID=rmdev2
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME
    #
    /usr/bin/dbipoll.pl -dbtype=oracle -server=servername2 -dbport=1521 -db=development -user=myuser -password=mypassword -table=dba_audit_trail -columns="OS_PROCESS,OS_USERNAME,USERNAME,USERHOST,ACTION,ACTION_NAME,TRANSACTIONID,SCN,SQL_BIND,SQL_TEXT,TO_CHAR(EXTENDED_TIMESTAMP,'MM/DD/YYYY HH24:MI:SS'),EXTENDED_TIMESTAMP" -countfile="/app/oracle/product/monitor/countfile2" -countkey="EXTENDED_TIMESTAMP" -o="/opt/oracledb2.log"
    Is there anything wrong with it? Can anyone advise on the problem, or suggest any solutions/workarounds? Thanks in advance.

    Here is a sample of dbipoll.pl:
    Code:
    #!/usr/bin/perl
    #
    #
    # Now accepts arguments for countkey, table, and columns.
    # countkey will ideally be a timestamp field, such as EXTENDED_TIMESTAMP in AUD$.
    #
    # Need to deal with the following variations...
    # 1 - Oracle local
    # 2 - Oracle remote
    # 3 - mysql local
    # 4 - mysql remote
    # 5 - Sybase local
    # 6 - Sybase remote
    # 
    # Oracle connect string:	$dbh = DBI->connect("dbi:Oracle:host=$server;sid=$db;port=$dbport", $user, $passwd);
    # Oracle connect string without env vars: $dbh = DBI->connect( "dbi:Oracle:$db", $username, $passwd )
    # mysql connect string:		$dbh = DBI->connect("dbi:mysql:database=$db\;host=$server;port=$dbport","$user","$passwd");
    # Sybase connect string:	$dbh = DBI->connect("dbi:sybase", $username, $password, $db);
    		
    use IO::Socket;
    use strict;
    use DBI;
    use Getopt::Long;
    
    our ($server,$db,$table,$columns,$user,$enpasswd,$passwd,$dbtype,$tcphost,$tcpport,
    	$countfile,$countkey,$ofile,$dbport,$baseconnect,$dbh);
    
    GetOptions(
    		"dbtype=s"	=>\$dbtype,	# Type of DB (Oracle, MySQL, etc.)
    		"server=s"	=>\$server,	# IP address of database server.
    		"dbport=s"	=>\$dbport,	# Port for connecting to remote DB.
    		"db=s"		=>\$db,		# Database to retriever records from.
    		"user=s"	=>\$user,	# DB username.
    		"password=s"	=>\$passwd,	# Unencrypted password.
    		"xpassword=s"	=>\$enpasswd,	# Encrypted password.
    		"table=s"	=>\$table,	# Table to retrieve from.
    		"columns=s"	=>\$columns,	# Columns to fetch.
    		"tcphost=s"	=>\$tcphost,	# Splunk server IP.
    		"tcpport=s"	=>\$tcpport,	# Splunk tcp listener port.
    		"countfile=s"	=>\$countfile,	# File to store row count status.
    		"countkey=s"	=>\$countkey,	# Field to use as count value
    		"o=s"		=>\$ofile,	# File for output (default is stdout).
    	);
    
    # Deal with encrypted passwords first...
    if (!$passwd) {
    	if ($enpasswd) {
    		$passwd = `echo $enpasswd | openssl bf -d -a -pass file:key`;
    	}
    }
    
    if ($dbtype =~ /oracle/i) {
        $ENV{'DBI_DRIVER'} = "Oracle";
        $dbtype = "Oracle";
        $dbh->{InactiveDestroy} = 1;
        #$baseconnect = DBI->connect("dbi:$dbtype:$db");
        if (!$dbport) {
            # Assume a local DB.
            $dbh = DBI->connect("dbi:$dbtype:$db", "$user", "$passwd") ||
              die( $DBI::errstr . "\n" );
        } else {
            if (!$passwd) {
                $dbh = DBI->connect("dbi:Oracle:host=$server;sid=$db;port=$dbport", $user) ||
                  die( $DBI::errstr . "\n" );
            } else {
                $dbh = DBI->connect("dbi:Oracle:host=$server;sid=$db;port=$dbport", $user, $passwd) ||
                  die( $DBI::errstr . "\n" );
            }
        }
    }
    if ($dbtype =~ /mysql/i) {
        $ENV{'DBI_DRIVER'} = "mysql";
        $dbtype = "mysql";
        #$baseconnect = DBI->connect("dbi:$dbtype:database=$db;host=$server");
        if (!$dbport) {
            # Assume a local DB.
            $dbh = DBI->connect("dbi:$dbtype:host=$server;database=$db","$user","$passwd") ||
              die( $DBI::errstr . "\n" );
            #$dbh = $baseconnect . "$user" . "$passwd";
        } else {
            if (!$passwd) {
                $dbh = DBI->connect("dbi:$dbtype:database=$db;host=$server;port=$dbport","$user") ||
                  die( $DBI::errstr . "\n" );
            } else {
                $dbh = DBI->connect("dbi:$dbtype:database=$db;host=$server;port=$dbport","$user","$passwd") ||
                  die( $DBI::errstr . "\n" );
            }
        }
    }
    if (($dbtype =~ /sybase/i) && (!$ENV{'SYBASE'})) {
        $ENV{'SYBASE'} = '/usr/local/freetds';
        $ENV{'DSQUERY'} = "$server";
        $dbtype = "sybase";
        #$baseconnect = DBI->connect("dbi:$dbtype", "$db");
        if (!$dbport) {
            # Assume a local DB.
            $dbh = DBI->connect("dbi:$dbtype:database=$db","$user","$passwd") ||
              die( $DBI::errstr . "\n" );
        } else {
            if (!$passwd) {
                $dbh = DBI->connect("dbi:$dbtype:database=$db;port=$dbport","$user") ||
                  die( $DBI::errstr . "\n" );
            } else {
                $dbh = DBI->connect("dbi:$dbtype:database=$db;port=$dbport","$user","$passwd") ||
                  die( $DBI::errstr . "\n" );
            }
        }
    }
    my $sth;
    if ( -f $countfile) {
    	# Compare vs. file with saved count
    	open (CF, "<$countfile"); 
    	my $filecount = readline CF;
    	chomp $filecount;
    	close (CF);
    		# Adding an ORDER BY clause here may be a requirement
    		$sth = $dbh->prepare("SELECT $columns FROM $table where $countkey > \'$filecount\' order by $countkey");
    } else {
    	$sth = $dbh->prepare("SELECT $columns FROM $table order by $countkey");
    }
    
    our $handle;
    if (($tcphost) && ($tcpport)) {
    # Send directly to splunk server...
        # Create tcp socket to send the data to the splunk server
        my $remote = IO::Socket::INET->new( Proto     => "tcp",
                                            PeerAddr  => $tcphost,
                                            PeerPort  => $tcpport,
                                            Type      => SOCK_STREAM)
                                            or die "cannot connect to tcp daemon on $tcphost";
        $remote->autoflush(1);
    	$handle = $remote;
    	#$target = "\$remote";
    } elsif ($ofile) {
    # Write to local outfile...
        open (OFILE, ">$ofile");
    	$handle = *OFILE;
    	#$target = *OFILE;
    } else {
    # send to stdout
    	open (STDOUT, ">&1");
    	$handle = *STDOUT;
    	#$target = *STDOUT;
    }
    
    # Get some Rows...
    $sth->execute or die $sth->errstr;
    open (CF, ">$countfile");
    while(my $hash_ref = $sth->fetchrow_hashref) {
      my $output = "";
      my $lastrow;
      foreach my $key (keys(%$hash_ref)) {
          my $str = $hash_ref->{$key};
          if ($str =~ /"/) {
    	  (my $xstr = $str) =~ s/"/\\"/g;
    	  $str = "\"$xstr\"";
          } else {
    	  $str = "\"$str\"" if ($str =~ / /);
    	  $str = "\"$str\"" if ($str =~ /,/);
          }
          $output = $output . $key . "=" . $str . ",";
          $lastrow = $hash_ref->{$key} if (lc($key) eq lc($countkey));
          
      }
      chop($output);

      # Update countfile
      # Where did we leave off?...
      *CF->autoflush(1);
      seek(CF,0,0);
      print CF "$lastrow\n";

      # And send them somewhere.
      print $handle $output . "\n";
    }
    
    $sth->finish();
    $dbh->disconnect;
    
    close (CF);
    close $handle;

  2. #2
    Join Date
    Feb 2006
    Posts
    173
    Provided Answers: 1
    However, the second wrapper script, call_dbipoll2.sh, is successful only for the first run. For subsequent runs, I've noticed that countfile2 does not get updated with the latest timestamp (i.e. the file returns to blank) and oracledb2.log returns to blank as well.
    Since it works the first time, and the files "/app/oracle/product/monitor/countfile2" and "/opt/oracledb2.log" get populated with data that you viewed, that means the 2nd run is not retrieving any rows via:
    Code:
    $sth = $dbh->prepare("SELECT $columns FROM $table where $countkey > \'$filecount\' order by $countkey");

    and thus these files ("/app/oracle/product/monitor/countfile2", "/opt/oracledb2.log") get removed/created empty, since the script does not first validate that any data was retrieved before opening them with ">".

    This is not really a problem, but I would recommend changing the file test from
    Code:
    if ( -f $countfile)
    to
    Code:
    if ( -s $countfile)
    Now, to see why data is not being returned: after the first (successful) run, change the script to print the last value retrieved from $countfile by inserting a print:
    Code:
        chomp $filecount;
        close (CF);
        print "Last entry 1st run: $filecount\n";

    And also print "$str" after it gets populated via:
    Code:
      foreach my $key (keys(%$hash_ref)) {
        my $str = $hash_ref->{$key};
        print "str value: $str\n";


    You can then see if the values retrieved are in the correct order.

    After the above, you might also want to try it again, adding sort to the keys function on the hash:
    Code:
    foreach my $key (sort(keys(%$hash_ref))) {
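
    For what it's worth, keys on a Perl hash come back in no defined order, so the field order in each output line is not guaranteed; sort just makes the output stable and easier to eyeball. A quick standalone illustration (the column names are made up for the example):
    Code:
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical row, similar in shape to what fetchrow_hashref returns.
    my %row = ( USERNAME => "myuser", ACTION_NAME => "LOGON", SCN => 12345 );

    print join(",", keys %row), "\n";        # field order is unspecified
    print join(",", sort keys %row), "\n";   # ACTION_NAME,SCN,USERNAME - always the same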


    hth


  3. #3
    Join Date
    Jul 2010
    Posts
    3
    Thanks for the reply.

    I guess it is as you mentioned: it is not retrieving any rows, and thus the files are created empty.

    I realised this by running the script to see what it had captured (the latest timestamp), while at the same time logging in to the database to check the latest row (timestamp) in the table, and comparing the two.

    As long as there are new events between subsequent runs, the script is able to pull data. But when the script runs and there are no new events in the database, it does not pull anything and thus creates the empty files. Once the empty files have been created, the script no longer works even if there are new events later on.

    Is there any way to modify the script a little to resolve this?

  4. #4
    Join Date
    Feb 2006
    Posts
    173
    Provided Answers: 1
    Change:
    Code:
    if ( -f $countfile)
    To:
    Code:
    if ( -s $countfile)
    This will validate that the file exists and is populated.
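
    To see the difference in isolation: a zero-byte file still passes -f, but -s only returns true when the file actually has content (it returns the size in bytes). A quick throwaway check (the path is just for the demo):
    Code:
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $f = "/tmp/empty_demo";              # throwaway demo path
    open(my $fh, ">", $f) or die "$f: $!";  # creates the file empty
    close $fh;

    print "-f: ", ( -f $f ? "true" : "false" ), "\n";   # true  - it is a plain file
    print "-s: ", ( -s $f ? "true" : "false" ), "\n";   # false - the size is 0 bytes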

    Then you want to validate that you actually retrieved data before opening "/app/oracle/product/monitor/countfile2". Change the code to something like this:
    From:
    Code:
    # Get some Rows...
    $sth->execute or die $sth->errstr;
    open (CF, ">$countfile");
    while(my $hash_ref = $sth->fetchrow_hashref) {
      my $output = "";
      my $lastrow;
      foreach my $key (keys(%$hash_ref)) {

    To:
    Code:
    # Get some Rows...
    $sth->execute or die $sth->errstr;
    my $data_retrieved = "0";
    while(my $hash_ref = $sth->fetchrow_hashref) {
      my $output = "";
      my $lastrow;
      if ( $data_retrieved ne "1" ) {
        open (CF, ">$countfile");
      }
      $data_retrieved = "1";
      foreach my $key (keys(%$hash_ref)) {

    This will only open (i.e. remove/create) the file when data has been retrieved. So, on the first run the file will be populated, and on future runs where, as you mentioned, there are no new events and thus no data retrieved, the file will stay the same until there are new events.
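
    Another way to get the same effect, if you prefer to keep the fetch loop itself untouched, is to remember the last $countkey value while fetching and only rewrite the countfile after the loop, when at least one row came back. This is just a sketch against the variables dbipoll.pl already has ($sth, $handle, $countfile, $countkey); it is not tested against your setup, and it quotes every field for simplicity instead of only when needed:
    Code:
    # Sketch only: defer the countfile rewrite until after the fetch loop.
    my $lastrow;                                # last countkey value seen, if any
    while (my $hash_ref = $sth->fetchrow_hashref) {
        my @pairs;
        foreach my $key (sort keys %$hash_ref) {
            push @pairs, $key . "=\"" . $hash_ref->{$key} . "\"";
            $lastrow = $hash_ref->{$key} if (lc($key) eq lc($countkey));
        }
        print $handle join(",", @pairs) . "\n";
    }
    if (defined $lastrow) {                     # rows were fetched, safe to rewrite
        open(CF, ">$countfile") or die "cannot write $countfile: $!";
        print CF "$lastrow\n";
        close(CF);
    }

    The trade-off is that the timestamp is only saved once at the end of the run instead of after every row, which should be fine for a 5-minute polling interval.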

    Hope I got the concept across and this was helpful...



  5. #5
    Join Date
    Jul 2010
    Posts
    3
    Hi,

    Thanks, it seems to be working fine. It was indeed helpful.

    Will continue to monitor the logs for any issues..

    much appreciated.
