
Overview

This application example covers the use of the ROOT data analysis framework on OSG Connect. We'll use Parrot to access CVMFS on any worker node, regardless of whether it is natively mounted.

Background

ROOT is a data analysis framework commonly used in high energy physics. We'll use a sample piece of code (shamelessly stolen from Ilija Vukotic) that prints all of the TTrees and their branches for a given ROOT file.

Testing ROOT on the submit host

For this example, we're going to use ROOT in a manner similar to a typical ATLAS job. The first thing to do is set up our working directory for the tutorial, or simply run 'tutorial root'.

[username@login01 ~]$ mkdir -p root/log; cd root

We'll need to run a few scripts to get the ROOT environment set up properly. This will add ROOT to our PATH and point LD_LIBRARY_PATH at the correct libraries.

file: environment.sh
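The contents of environment.sh aren't reproduced here. As a rough sketch, it sources the ATLASLocalRootBase setup from CVMFS and then enables the tools whose "Setting up ..." messages appear in the transcript below; the exact function names, flags, and version strings are assumptions that depend on your ATLASLocalRootBase release (the ROOT version shown matches the compile lines later in this page).

```bash
#!/bin/bash
# environment.sh -- a minimal sketch, NOT the tutorial's actual file.
# Assumes the standard ATLASLocalRootBase layout on CVMFS.
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh

echo "Setting up gcc"
localSetupGcc gcc472_x86_64_slc6          # version string is illustrative
echo "Setting up ROOT"
localSetupROOT 5.34.18-x86_64-slc6-gcc4.7 # matches the make output below
echo "Setting up xRootD"
localSetupXRootD
```

Sourcing this script (rather than executing it) is what lets it modify PATH and LD_LIBRARY_PATH in your current shell.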

Let's try running ROOT. We'll use the '-l' flag because we don't want ROOT's splash screen:
[username@login01 root]$ source environment.sh
Setting up gcc
Setting up ROOT
Setting up xRootD
[username@login01 root]$ root -l
*** DISPLAY not set, setting it to 10.150.25.138:0.0
root [0]

There are some complaints about DISPLAY, but that's alright because we don't plan to do anything requiring X11 graphics. You can quit out of ROOT with '.q':

root [0] .q
[username@login01 root]$

Running some ROOT code

We're going to need some ROOT code, as well as a Makefile to compile it. Here is the ROOT code:

file: inspector.C
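The actual inspector.C isn't reproduced here. A hypothetical sketch of such a program follows; the structure (open the file, loop over keys, print each TTree and its branches) is mine and may differ from Ilija's code in names and output format:

```cpp
// inspector.C -- an illustrative sketch, not the tutorial's actual code.
#include <iostream>
#include "TFile.h"
#include "TKey.h"
#include "TTree.h"
#include "TBranch.h"
#include "TCollection.h"

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: inspector <root-file>" << std::endl;
        return 1;
    }
    // TFile::Open understands local paths as well as root:// URLs,
    // so the same binary can read from an XRootD server.
    TFile* f = TFile::Open(argv[1]);
    if (!f || f->IsZombie()) return 1;

    // Walk every key in the file; keep only the TTrees.
    TIter next(f->GetListOfKeys());
    while (TKey* key = (TKey*)next()) {
        TTree* tree = dynamic_cast<TTree*>(key->ReadObj());
        if (!tree) continue;
        std::cout << tree->GetName() << ":" << tree->GetEntries() << std::endl;
        // Print each branch name with its total size in bytes.
        TIter bnext(tree->GetListOfBranches());
        while (TBranch* br = (TBranch*)bnext()) {
            std::cout << br->GetName() << "\t" << br->GetTotBytes() << std::endl;
        }
    }
    f->Close();
    return 0;
}
```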
Here's the Makefile:

file: Makefile
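The Makefile itself isn't shown here. The following is a plausible reconstruction inferred from the g++ lines in the make output below, using root-config to pick up the include and library paths; the original may differ in detail:

```makefile
# Makefile -- a reconstruction based on the compile lines shown in the
# make output below; not necessarily the tutorial's actual file.
CXX      = g++
CXXFLAGS = -O2 -Wall -fPIC $(shell root-config --cflags)
LDFLAGS  = -O2 -m64 $(shell root-config --libs) -rdynamic -lTreePlayer

all: inspector

inspector: inspector.o
	$(CXX) inspector.o $(LDFLAGS) -o inspector
	@echo "inspector done"

inspector.o: inspector.C
	$(CXX) $(CXXFLAGS) -c -o inspector.o inspector.C

clean:
	rm -f inspector inspector.o
```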

You can run "make" to build the inspector executable:
[username@login01 root]$ make
g++ -O2 -Wall -fPIC -pthread -m64 -I/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/root/5.34.18-x86_64-slc6-gcc4.7/include   -c -o inspector.o inspector.C
g++ -O2 -m64 inspector.o -L/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/root/5.34.18-x86_64-slc6-gcc4.7/lib -lCore -lCint -lRIO -lNet -lHist -lGraf -lGraf3d -lGpad -lTree -lRint -lPostscript -lMatrix -lPhysics -lMathCore -lThread -pthread -lm -ldl -rdynamic  -lTreePlayer -o inspector
inspector done

Let's try it out. We're going to read data remotely from an XRootD filesystem. Replace ROOT-FILE below with the location of a ROOT file with event data:

[username@login01 root]$ ./inspector ROOT-FILE | head -n10
1
susy:16076:199459108:2505
EF_e20_medium	5071
EF_e22_medium	5080
EF_e22vh_medium1	5391
EF_e45_medium1	6958
EF_mu18	6806
EF_mu18_MG	6881
EF_mu18_MG_medium	7066
EF_mu18_medium	6987

Accessing software anywhere using Parrot

Suchandra has spoken a bit about Parrot for data access. I've written a bit of shell code to do that for you. Just like in the previous example, ROOT-FILE will need to be replaced with the location of a ROOT file.

file: wrapper.sh
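The wrapper script isn't reproduced here. In outline, it has to detect whether /cvmfs is mounted on the worker node and, if not, run the payload under parrot_run (from cctools) so CVMFS is emulated in user space; it also reports the worker's hostname, which you can see at the end of the job output below. The sketch that follows is illustrative, and ROOT-FILE remains a placeholder; the real wrapper also has to stage cctools and tell Parrot where to find the CVMFS repositories:

```bash
#!/bin/bash
# wrapper.sh -- a minimal sketch, NOT the tutorial's actual script.
# Assumes environment.sh and the inspector binary were transferred
# alongside this wrapper by HTCondor.

if [ -d /cvmfs/atlas.cern.ch ]; then
    # CVMFS is natively mounted; use it directly.
    source environment.sh
    ./inspector ROOT-FILE
else
    # No native mount: run the payload under Parrot, which intercepts
    # filesystem calls and serves /cvmfs in user space.
    parrot_run bash -c 'source environment.sh && ./inspector ROOT-FILE'
fi

# Report which worker node we ran on (see the last line of the output).
hostname -f
```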

Building an HTCondor Job

Creating a job submit file for this code is pretty straightforward. The wrapper script does the bulk of the heavy lifting; we just have to make sure we are transferring the appropriate files. The requirements line is optional here, but I've included it because I'd like to see my job run on OSG.

file: root.submit
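The submit file isn't reproduced here. A plausible version follows; the file names match the rest of this example, the output path matches the log/out.85995.0 file we inspect below, and the requirements expression is illustrative rather than the tutorial's actual one:

```
# root.submit -- a plausible sketch, not necessarily the tutorial's file.
universe = vanilla
executable = wrapper.sh
transfer_input_files = environment.sh, inspector

# Optional: steer the job toward OSG glidein resources (illustrative).
requirements = IS_GLIDEIN == True

output = log/out.$(Cluster).$(Process)
error  = log/err.$(Cluster).$(Process)
log    = log/job.log

queue 1
```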
Let's submit the job:
[username@login01 root]$ condor_submit root.submit
Submitting job(s).
1 job(s) submitted to cluster 85995.

We can see that it's running:

[username@login01 root]$ condor_q username


-- Submitter: login01.osgconnect.net : <192.170.227.195:42546> : login01.osgconnect.net
 ID      OWNER     SUBMITTED     RUN_TIME ST PRI SIZE CMD
85995.0  username  5/20 13:16   0+00:00:05 R  0   0.0  wrapper.sh

This code puts all of its output on stdout, so let's check the output:

[username@login01 root]$ tail -n10 log/out.85995.0
vx_m	465260
vx_n	16759
vx_nTracks	223060
vx_px	480170
vx_py	480150
vx_pz	481488
vx_sumPt	465725
vx_x	481620
vx_y	444255
vx_z	499665
c-110-34.aglt2.org

Success!
