As mentioned in the introduction, one of the advantages of Pegasus is scalability: Pegasus can handle millions of computational tasks for you. In this section, we will learn how to run large-scale NAMD simulations with Pegasus. First, we focus on a toy example of running a single NAMD job. Then we extend the workflow to run N sequential jobs. Finally, we will see an example of running M parallel jobs, where each parallel job contains N sequential jobs.
Single NAMD Job
The necessary files are available to the user by invoking the tutorial command.
Figure 1. The actual workflow of executing a single job is transformed into a set of jobs in the Pegasus workflow. Such a transformation is useful for keeping track of input and output data, in particular when we have to deal with a large number of jobs.
Now we will take a closer look at the files to understand how to create a Pegasus workflow for a single job.
Input and output files
There are several files and directories under the path "tutorial-pegasus-namd/Single". Some files are required to run the NAMD simulations; the others are related to Pegasus workflow management. The files required by NAMD are under the following directories:
The following files are related to Pegasus workflow management:
The file "pegasusrc" contains the Pegasus configuration. We can simply keep this file in the current working directory without worrying much about the details (if you would like to know them, please visit the Pegasus home page). The files dax.xml and sites.xml contain the information about the workflow and data management.
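For orientation, a pegasusrc typically holds just a few Pegasus properties; the entries below are illustrative assumptions, and the tutorial's own pegasusrc should be used unchanged:

```
# Illustrative Pegasus properties (keep the tutorial's pegasusrc as-is)
pegasus.data.configuration = condorio
pegasus.catalog.site.file = sites.xml
```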
Let us pay attention to a few parts of the "submit" script to understand how the workflow is submitted. Open the file "submit" and take a look.
The purpose of the sites-generator script is to generate the sites.xml file. Several lines are declared in the sites-generator script; the ones we need to understand are those defining the scratch and output directories.
The files "submit.bash" and "sites-generator.bash" do not change much from one workflow to another. We need to edit these two files only when we change the name of the dax-generator and/or the paths of the outputs, scratch and workflows directories.
The file dax.xml contains the workflow information, including the description of the jobs and the required input files. We could write dax.xml by hand, but the XML format is not very pleasant for the human eye. Here, dax.xml is generated via the Python script "dax-generator-singleJob.py". Take a look at the script; it is self-explanatory, with plenty of comments. If you have difficulty understanding it, please feel free to send us an email. Here is a brief description of the dax-generator script.
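To give a feel for what the dax-generator produces, here is a minimal, self-contained sketch that writes a one-job DAX using only the standard library. The tutorial's actual script uses the Pegasus DAX Python API, and all file and job names below are placeholders:

```python
import xml.etree.ElementTree as ET

# A DAX is an XML "abstract DAG": jobs plus the files they consume and produce.
adag = ET.Element("adag", name="namd-single")

# One NAMD job; the input/output file names are illustrative placeholders.
job = ET.SubElement(adag, "job", id="ID0000001", name="namd")
ET.SubElement(job, "argument").text = "namd_eq.conf"
ET.SubElement(job, "uses", name="namd_eq.conf", link="input")
ET.SubElement(job, "uses", name="eq.restart.coor", link="output")

ET.ElementTree(adag).write("dax.xml")
```

The real dax-generator builds the same kind of document, but through API calls that also record executables and file transfers.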
Job submission and status
To submit the job
To check the status of the submitted job
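For reference, submission and monitoring typically look like the transcript below; the run-directory path is an example, so substitute the one reported at submit time (pegasus-status and pegasus-analyzer are standard Pegasus command-line tools):

```
$ ./submit.bash
$ pegasus-status -l /path/to/workflows/run0001
$ pegasus-analyzer /path/to/workflows/run0001   # inspect failures, if any
```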
Pegasus creates the following directories
The paths of the scratch, workflows and outputs directories are declared in the "submit" script at lines 19, 20, 25, 26 and 47.
Under the directory "tutorial-pegasus-namd/Exercises/SingleEx1" you will see the relevant files to run the single NAMD job with Pegasus. However, you need to change a few things before submitting the job. The errors are in the names of the dax-generator and the NAMD input files. You have to correct these two file names in submit.bash and in the dax-generator script.
Under the directory "tutorial-pegasus-namd/Exercises/SingleEx2" you will see the relevant files. In this exercise, you have to specify the correct paths for the scratch, output and workflow directories. All this information is included in sites-generator.bash and submit.bash.
The workflow of N sequential jobs demonstrates the ability to complete large-scale molecular dynamics simulations with Pegasus. We break a long-timescale simulation into several short-timescale simulations: the short simulations are performed in sequence, and the results are combined to achieve the long-timescale simulation. In this example, we will learn how to run N sequential NAMD jobs.
Here, the NAMD jobs are executed one by one, linked through restart files: each NAMD job generates the restart files that are necessary to start the next job. To run N sequential NAMD jobs, we need N input files, and each input file has to specify that the restart files from the previous simulation are available to it. So our first step is to generate N input files suitable for running N sequential jobs.
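Concretely, consecutive NAMD input files chain together through the restart directives. A fragment of one job's input might look like this (the keywords are standard NAMD directives; the file names are illustrative):

```
# Read the restart files written by the previous job in the sequence
set inputname   eq_step1            ;# output basename of the previous job
bincoordinates  $inputname.restart.coor
binvelocities   $inputname.restart.vel
extendedSystem  $inputname.restart.xsc

outputname      eq_step2            ;# this job's restart files feed the next job
```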
Figure 2. The workflow to run a linear sequence of jobs. The blue circles represent jobs and the arrows represent the direction of data flow: data from J1 is required to start job J2, data from J2 is required to start job J3, and so on.
Generating N-sequential input files
We use a script to do the task of generating N input files to run the sequential molecular dynamics simulations.
Basically, the script generates the input files from a reference template. In these input files, the names of the restart files are specified in the correct order, which makes the input files suitable for sequential execution of NAMD jobs.
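A minimal, generic version of such a generator is sketched below: it fills a template with the previous job's output name, so each input reads the restart files of its predecessor. The template placeholders and file names are assumptions for illustration, not the tutorial's exact script:

```python
N = 5  # number of sequential jobs

# Template for one NAMD input fragment; {prev} and {curr} are placeholders.
template = """\
bincoordinates  {prev}.restart.coor
binvelocities   {prev}.restart.vel
extendedSystem  {prev}.restart.xsc
outputname      {curr}
"""

for i in range(1, N + 1):
    prev = "eq_{:03d}".format(i - 1)  # job 0 is the initial equilibration
    curr = "eq_{:03d}".format(i)
    with open("namd_eq_{:03d}.conf".format(i), "w") as f:
        f.write(template.format(prev=prev, curr=curr))
```

Because job i's `outputname` is job i+1's `{prev}`, the N inputs form a chain that matches the workflow in Figure 2.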
We have to change our dax.xml via the dax-generator script to account for the N sequential jobs. Let us see where our dax-generator script is.
Let us analyze the dax-generator script to find the primary differences between running N sequential jobs and running a single job.
We see from the dax-generator script that it is easy to take the single-job script and build the script for N sequential jobs. In fact, we could take any dax-generator script and modify it for a new workflow: the abstraction layer provided by Pegasus is a great strength when re-using a workflow or modifying it to fit closely related computational tasks. If we don't change the scratch and output directories, there is no need to change the sites-generator script. Next, we will work on the submit.bash file.
Job submission and status
Since the dax-generator is "dax-generator-namdEq-sequential.py", this file must be specified in the submit.bash script to generate the dax.xml file. Edit submit.bash as follows:
As mentioned before, we can submit the job and check its status as follows:
Go to the directory "tutorial-pegasus-namd/Exercises/NSeqEx3". All the input files needed to run 1,000 sequential NAMD jobs are in the file "inputsN1000.tar.gz"; they were generated in advance to save time. Uncompress them by running the command "tar -xvzf inputsN1000.tar.gz", which creates a directory "inputs" containing all the NAMD input files. The dax-generator script needs to know about these 1,000 input files; include this information in the dax-generator.
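One common way to hand the dax-generator all 1,000 input files is to glob the inputs directory rather than hard-coding names. A self-contained sketch (the directory layout and naming pattern are assumptions for illustration):

```python
import glob
import os

# Create a few placeholder inputs so this sketch is self-contained;
# in the exercise these come from unpacking inputsN1000.tar.gz.
os.makedirs("inputs", exist_ok=True)
for i in range(1, 4):
    open(os.path.join("inputs", "namd_eq_{:04d}.conf".format(i)), "w").close()

# Collect every NAMD input, sorted so the sequential job order is deterministic.
input_files = sorted(glob.glob(os.path.join("inputs", "*.conf")))
print(len(input_files))
```

The dax-generator can then loop over `input_files` in order, creating one job per file and a dependency from each job to the next.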
M-Parallel, N-Sequential jobs
We now consider the case of running large molecular dynamics simulations of a protein at multiple temperatures. The solution is to run M parallel simulations corresponding to the different temperatures, where each parallel simulation consists of N sequential jobs. We can take the N-sequential workflow as a template and modify it to fit the M-parallel, N-sequential simulations.
Figure 3. The NAMD simulation is performed for M temperatures: T1, T2, ..., TM. For each temperature, there are N sequential jobs.
Generating M*N input files
We use a script to generate the M*N input files for the M-parallel, N-sequential molecular dynamics simulations.
The script "namd_gen_pegasus_input.bash" generates M parallel sets of inputs, each consisting of N sequential NAMD inputs. Each parallel chain of N sequential jobs is carried out at a particular temperature, which is drawn at random.
We have to change our dax.xml via the dax-generator script to account for the M-parallel, N-sequential jobs. Let us see where our dax-generator file is.
Let us analyze the dax-generator script to find the primary differences between running M-parallel, N-sequential jobs and running N-sequential jobs.
The dax-generator for M-parallel, N-sequential jobs is adapted from the N-sequential dax-generator by adding one more loop. Since the paths of the scratch and output directories are defined relative to the working directory, there is no need to change the sites-generator script. Next, we will work on the submit.bash file.
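The structural change can be seen in miniature below: an outer loop over the M temperatures and an inner loop over the N steps, with parent-child edges added only within each temperature's chain. This is a schematic with plain tuples, not the tutorial's Pegasus API calls:

```python
M, N = 3, 4  # parallel chains and sequential steps per chain (small for illustration)

jobs = []          # (m, n) identifies step n of chain m
dependencies = []  # (parent, child) pairs

for m in range(M):
    for n in range(N):
        jobs.append((m, n))
        if n > 0:
            # Each step depends only on the previous step of the SAME chain;
            # chains at different temperatures never depend on each other.
            dependencies.append(((m, n - 1), (m, n)))

print(len(jobs), len(dependencies))
```

Because no edge crosses between chains, Pegasus is free to run all M chains concurrently while keeping each chain strictly sequential.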
Job submission and status
Since the dax-generator is "dax-generator-namdEq-sequential.py", this file must be called in the submit script to generate the dax.xml file.
We submit the jobs and check their status as follows:
Go to the directory "tutorial-pegasus-namd/Exercises/MtimesNseqEx4". All the input files are in "inputsM1000N50.tar.gz"; they were generated in advance to save time. Uncompress them by running the command "tar -xvzf inputsM1000N50.tar.gz", which creates a directory "inputs" containing all the NAMD input files. The input files represent 1,000 parallel chains, each of 50 sequential NAMD jobs. Include this information in the dax-generator.
- Pegasus requires the dax.xml, sites.xml and pegasusrc files. These files contain the information about the executables, the input and output files, and the relations between them during job execution.
- It is convenient to generate the XML files via scripts. In our example, dax.xml is generated via a Python script and sites.xml via a bash script.
- To implement a new workflow, edit the existing dax-generator, sites-generator and submit scripts. In the examples above, we modified the single-NAMD-job workflow to implement the N-sequential and M-parallel, N-sequential workflows.
Pegasus Documentation Pegasus documentation page.
OSG QuickStart. Getting started with the Open Science Grid (OSG).
Condor Manual. Manual for the HTCondor high-throughput computing software that schedules the jobs on OSG.
For further assistance or questions, please email email@example.com.