Category: Application Performance Management

Automated JRockit Flight Recording using Perl


The program below captures JVM activity using the JRockit Flight Recorder. It is automated to capture a dump trace every 5 minutes for a set duration; the 5-minute interval keeps the size of each dump trace manageable. For a deeper analysis of the activity, the Diagnostic Volume of events generated can be changed via the WebLogic console to Low, Medium, or High.
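Under the hood the automation is just a short sequence of jrcmd calls against the target JVM's PID. The sketch below shows that sequence in its simplest form; the PID, duration, and output path are placeholders chosen purely for illustration, and it assumes it is run from $JAVA_HOME/bin so that ./jrcmd resolves.

#!/usr/bin/perl -w
# Minimal sketch of the jrcmd sequence the full script below automates.
# The PID, duration, and output path are illustrative placeholders.
use strict;

my $pid      = 12345;                                   # target WebLogic JVM PID
my $jfr_file = '/opt/weblogic/JFRrecordings/test.jfr';  # example output file

# Show the recordings currently active in the JVM
system("./jrcmd $pid check_flightrecording");

# Start a 10-minute recording that is written to $jfr_file when it completes
system("./jrcmd $pid start_flightrecording filename=$jfr_file duration=600s compress=true");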

This is my first Perl program, and I am open to suggestions for improving it.

#!/usr/bin/perl -w

use strict;
use Sys::Hostname;
use Time::Local;
use POSIX;

# Hostname and instance variables
my $PID;
my $name;
my $domain = substr hostname, 0, index(hostname, '.');

# List the running WebLogic instances: PID plus the -Dweblogic.Name=... argument
# (grep -v grep keeps the grep process itself out of the list)
system(q{ps -ef | grep "weblogic.Name" | grep -v grep | awk '{print $2, $9}' > pid_list.txt});
open(my $fh, '<', 'pid_list.txt') or die "Unable to open pid_list.txt: $!";
print "\nThe LIST of the WEBLOGIC instances and their PID on $domain:\n\n";

# Build a PID => instance-name hash and print it
my %hash;
while (<$fh>) {
    chomp;
    ($PID, $name) = split ' ', $_;
    $hash{$PID} = $name;
    print "$PID => $hash{$PID}\n";
}
close($fh);

print "\nEnter the PID:";
$PID=<>;
$PID=substr $PID, 0, index($PID, '\n');
$name = substr $hash{$PID}, 16;  


print "Enter the duration of the recording in SECS:";
my $duration =<>;
# print "The test duration is:$duration\n";
my $finalDuration = join '', (substr $duration, 0, index($duration, '\n')), 's';
# print "Duration:$finalDuration\n";

# Move into the JRockit bin directory under $JAVA_HOME so ./jrcmd resolves
my $newWorkDir = $ENV{JAVA_HOME} or die "JAVA_HOME is not set";
print "\nJava Home::$newWorkDir\n";
$newWorkDir = "$newWorkDir/bin";

chdir $newWorkDir or die "Cannot chdir to $newWorkDir: $!";
my $dir = getcwd();
print "\nCURRENT WORKING DIRECTORY::$dir\n";

# Ask jrcmd for the ongoing WLDF recording and pull out its name
my $wldf = qx{./jrcmd $PID check_flightrecording | awk '/compress=false/ {print \$4}'};
# The awk field comes back as name="..."; strip the leading name=" and the trailing quote and newline
$wldf = substr $wldf, 6, -2;
print "$wldf\n\n";

my $time = scalar localtime(time);
print "CURRENT TIME::$time\n\n";

# Pick the pieces out of the "Wdy Mon dd hh:mm:ss yyyy" string returned by localtime
my $mon  = substr $time, 4, 3;
my $year = substr $time, -4, 4;
my $mday = substr $time, -16, 2;
my $hour = substr $time, -13, 2;
my $min  = substr $time, -10, 2;
my $date = join '', $mday, $mon, $year;
my $testtime;   

# Hours 12-23 count as PM, everything earlier as AM
if ($hour >= 12) {
    $testtime = join '', $hour, $min, 'PM';
}
else {
    $testtime = join '', $hour, $min, 'AM';
}

# One dump is taken per 5-minute (300-second) interval
my $totdumpTraces = ceil($duration/300);
print "$totdumpTraces DUMP TRACES will be collected\n\n";

my $pFile = "/opt/weblogic/JFRrecordings/$domain-$name-$date-$testtime.jfr";

# Start the parent flight recording that runs for the full requested duration
my $parentTrace = "./jrcmd $PID start_flightrecording filename=$pFile duration=$finalDuration compress=true";

print "$parentTrace\n\n";
system($parentTrace);

for (my $dumpId = 1; $dumpId <= $totdumpTraces; $dumpId++)
{
   my $dFile = "/opt/weblogic/JFRrecordings/$domain-$name-$date-${testtime}_Dump$dumpId.jfr";
   # Dump the running flight recording to its own file, then wait 5 minutes
   my $dumpTrace = "./jrcmd $PID dump_flightrecording name=\"$wldf\" recording=1 copy_to_file=\"$dFile\" compress_copy=true";
   system($dumpTrace);
   sleep(300);
}
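Once the loop finishes, a quick sanity check that the dumps actually landed on disk can save a trip to the server later. The few lines below could be appended to the end of the script; they use only Perl built-ins and the same variables and output directory the script already uses.

# Optional sanity check: list the recordings written for this run
my @dumps = glob("/opt/weblogic/JFRrecordings/$domain-$name-$date-$testtime*.jfr");
print "\nRecordings written for this run:\n";
for my $jfr (@dumps) {
    printf "%-80s %12d bytes\n", $jfr, -s $jfr;
}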

Methodical Implementation of Application Performance Management

The concept of APM will not yield any value as long as the approach toward the process is flawed. Because APM tools offer live monitoring and problem alert notifications, organizations tend to rely heavily on them, hoping to triage each problem as soon as it happens. In a complex, heterogeneous environment it is an advantage to be notified about a problem on the fly, but a disadvantage to be notified at an ever-increasing rate. The core problem with APM is the absence of methodical implementation and maintenance of instrumented data in pre-production and production environments. Simply enabling dashboards and streaming alerts is overwhelming rather than helpful. To reduce the cost of quality, organizations need to proactively monitor, analyze, and triage issues in the pre-production environment. To achieve this, they need to tap into the goldmine of instrumented data and ask themselves:

  1. How is the data being instrumented?
  2. What is being instrumented?
  3. How and where is it being retained? How long should it be retained?
  4. What is the quality of the data?
  5. Is the data being tied and referenced across various tiers in the application environment?
  6. Is the raw data being cleansed, characterized and clustered into a readable form for further analysis? (A small sketch of this step follows the list.)
  7. How do we determine the sanity or correctness of the data? Are there any other tools and  methods to cross-reference the same?
  8. What tools do we apply to further analyze and report the data?
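As a toy illustration of point 6, the snippet below buckets raw response-time samples into a coarse histogram. It stays in the same language as the script above; the input format (a hypothetical response_times.txt with one millisecond value per line) is an assumption made purely for the example.

#!/usr/bin/perl -w
# Toy example for point 6: cleanse and cluster raw response times (in ms).
# Assumes a hypothetical file "response_times.txt" with one number per line.
use strict;

my %bucket;
open(my $fh, '<', 'response_times.txt') or die "Cannot open response_times.txt: $!";
while (my $ms = <$fh>) {
    chomp $ms;
    next unless $ms =~ /^\d+(\.\d+)?$/;      # cleanse: drop malformed samples
    my $label = $ms < 100  ? 'under 100ms'   # characterize: coarse latency bands
              : $ms < 500  ? '100-500ms'
              : $ms < 2000 ? '500ms-2s'
              :              'over 2s';
    $bucket{$label}++;                       # cluster: count samples per band
}
close($fh);
print "$_: $bucket{$_} samples\n" for sort keys %bucket;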


    http://www.shunra.com/shunrablog/index.php/2011/05/23/apm-is-broken-or-at-least-it%e2%80%99s-not-delivering-on-its-promise-of-improving-performance