diff --git a/docs/BoSCOv0.md b/docs/BoSCOv0.md
new file mode 100644
index 0000000..d3f1f6a
--- /dev/null
+++ b/docs/BoSCOv0.md
@@ -0,0 +1,406 @@
+%META:TOPICINFO{author="KyleGross" date="1481047997" format="1.1"
+version="1.6"}% %META:TOPICPARENT{name="BoSCO"}%
+This is an older release. Please go to
+[BOSCO](Trash/CampusGrids.BoSCO) for the latest release.
+
+# BOSCO v0
+
+## Introduction
+
+BOSCO is a job submission manager designed to help researchers manage
+large numbers (\~1000s) of job submissions to the different resources
+that they can access on a campus (initially a PBS cluster running
+Linux). This is the first release of BOSCO, v0.1. If you find any
+problems or need help installing or running BOSCO, please email
+ .
+
+It offers the following capabilities:
+
+  - Jobs are automatically resubmitted when they fail. The researcher
+    does not need to babysit their jobs.
+  - Job submissions can be throttled to meet batch scheduler settings
+    (e.g. only 10 jobs running concurrently). The researcher does not
+    need to make multiple submissions. BOSCO handles that for them.
+  - BOSCO is designed to be flexible and will, in the next version (v1),
+    allow jobs to be submitted to multiple clusters, perhaps with
+    different job schedulers (e.g. PBS, LSF, Condor).
+
+The primary advantage for the researcher is that they only need to learn
+one job scheduler environment even if the clusters utilize different
+native environments.
+
+## How to Install
+
+### Requirements
+
+  - **Submit-node**
+    This is the system that the researcher uses to submit jobs. In
+    general it can be the user's laptop, workstation, or it can be
+    another system that the user logs into for submitting jobs to the
+    cluster. A current limitation is that the submit-node must use the
+    same Linux flavor that the PBS cluster is using.
+  - **Cluster head-node**
+    This is the node that you normally login to on the PBS or Condor
+    cluster.
+  - **PBS flavors supported**
+    Torque and PBSPro
+  - **Condor flavors supported**
+    Condor 7.6 or later
+
+### Installation Procedure
+
+1. Download BOSCO
+   - BOSCO comes in 3 different flavors depending on your Linux
+     version. To find your Linux distribution and version type `cat
+     /etc/*-release`. Right click on the appropriate link below
+     (depending on your Linux version), select "save as" and save the
+     file in your home directory:
+     - [RHEL 5 or Scientific Linux 5 or
+       Centos 5](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_rhap_5.7.tar.gz)
+     - [RHEL 6 or Scientific Linux 6 or
+       Centos 6](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_rhap_6.2.tar.gz)
+     - [Debian 6](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_deb_6.0.tar.gz)
+   - Alternatively you can use `wget` and the correct link:
+
+        # If you have RHEL 5 or Scientific Linux 5 or Centos 5
+        wget ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_rhap_5.7.tar.gz
+
+        # If you have RHEL 6 or Scientific Linux 6 or Centos 6
+        wget ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_rhap_6.2.tar.gz
+
+        # If you have Debian 6
+        wget ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-0.1-x86_64_deb_6.0.tar.gz
+
+1.  Unpack the tar file and install BOSCO:
+
+        cd ~
+        mkdir tmp-install
+        cd tmp-install
+        tar xvzf ~/bosco-0.1-x86_64*.tar.gz
+        cd condor-7.*
+        ./bosco_install
+
+Example of the download and installation:
+
+    -bash-3.2$ wget
+    --2012-02-22 18:35:09--
+               => `condor-7.7.6-UW_development-rhel5.8-stripped.tar.gz'
+    Resolving ftp.cs.wisc.edu... 128.105.2.28
+    Connecting to ftp.cs.wisc.edu|128.105.2.28|:21... connected.
+    Logging in as anonymous ... Logged in!
+    ==> SYST ... done.  ==> PWD ... done.  ==> TYPE I ... done.
+    ==> CWD /condor/temporary/bosco/2012-02-20 ... done.
+    ==> SIZE condor-7.7.6-UW_development-rhel5.8-stripped.tar.gz ... 49527344
+    ==> PASV ... done.  ==> RETR condor-7.7.6-UW_development-rhel5.8-stripped.tar.gz ... done.
+    Length: 49527344 (47M)
+
+    100%[==========================================>] 49,527,344  30.2M/s   in 1.6s
+
+    2012-02-22 18:35:13 (30.2 MB/s) - `condor-7.7.6-UW_development-rhel5.8-stripped.tar.gz' saved [49527344]
+
+    -bash-3.2$ mkdir tmp-bosco/
+    -bash-3.2$ cd tmp-bosco/
+    -bash-3.2$ tar xzf ~/condor-7.7.6-UW_development-rhel5.8-stripped.tar.gz
+    -bash-3.2$ cd condor-7.7.6-UW_development-rhel5.8-stripped/
+    -bash-3.2$ mkdir man
+    -bash-3.2$ ./bosco_install
+    Installing Condor from /share/home/marco/tmp-bosco/condor-7.7.6-UW_development-rhel5.8-stripped to /share/home/marco/bosco
+
+    Condor has been installed into: /share/home/marco/bosco
+
+    Configured condor using these configuration files:
+    global: /share/home/marco/bosco/etc/condor_config
+    local:  /share/home/marco/bosco/local.gc2-wn2/condor_config.local
+
+    In order for Condor to work properly you must set your CONDOR_CONFIG
+    environment variable to point to your Condor configuration file:
+    /share/home/marco/bosco/etc/condor_config
+    before running Condor commands/daemons.
+    Created scripts which can be sourced by users to setup their Condor
+    environment variables. These are:
+    sh:  /share/home/marco/bosco/bosco.sh
+    csh: /share/home/marco/bosco/bosco.csh
+
+## How to Use
+
+Now BOSCO is installed. To use it:
+
+1.  Setup the environment
+2.  Add all the desired clusters (at least one)
+3.  Start BOSCO
+4.  Submit a test job
+5.  Submit a real job
+
+### Setup environment before using
+
+Since BOSCO is not installed in the system path, an environment file
+must be sourced every time you use BOSCO (start/stop/job submission
+or query, anything):
+
+``` screen
+$ source bosco/bosco_setenv
+```
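+
+Optionally, if you prefer not to type this at every login, you can source
+the setup file from your shell startup script; a minimal sketch, assuming
+a bash login shell and BOSCO installed under `~/bosco`:
+
+``` screen
+# assumption: bash login shell, BOSCO installed in ~/bosco
+echo 'source ~/bosco/bosco_setenv' >> ~/.bashrc
+```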
+
+### Add a cluster to BOSCO
+
+To add a new cluster to the resources you will be using through BOSCO:
+
+1.  Setup the environment appropriate for your shell as described in the
+    setup environment section (above).
+2.  For the cluster `mycluster` with user `username` and submit host
+    `mycluster-submit.mydomain` (FQDN) and queue manager PBS or Condor,
+    run (replacing the placeholder parts):
+
+        bosco_cluster --add username@mycluster-submit.mydomain
+
+Example:
+
+    -bash-3.2$ bosco_cluster -add itbv-ce-pbs.uchicago.edu
+    Enter password to copy ssh keys to itbv-ce-pbs.uchicago.edu:
+    The authenticity of host 'itbv-ce-pbs.uchicago.edu (128.135.158.176)' can't be established.
+    RSA key fingerprint is 8e:a6:db:18:80:6b:b7:de:56:c8:5a:a2:75:19:11:8d.
+    Are you sure you want to continue connecting (yes/no)? yes
+    Warning: Permanently added 'itbv-ce-pbs.uchicago.edu,128.135.158.176' (RSA) to the list of known hosts.
+    Installing BOSCO on itbv-ce-pbs.uchicago.edu...
+    Installation complete
+
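+If you added the wrong host, a cluster can also be removed again (see the
+command summary below); for example:
+
+``` screen
+bosco_cluster --remove itbv-ce-pbs.uchicago.edu
+```
+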
+You can see a list of the current clusters in BOSCO by typing:
+
+    bosco_cluster --list
+
+Example:
+
+    -bash-3.2$ bosco_cluster -list
+    itbv-ce-pbs.uchicago.edu
+
+
+### Starting BOSCO
+
+BOSCO has some persistent services that must be running. You'll have to start it at the beginning and probably after each reboot of your host.
+
+You should stop BOSCO before an upgrade and possibly before a shutdown
+of your host. If you will not use BOSCO anymore, uninstall will remove
+it from your system.
+
+To start BOSCO:
+
+    bosco_start
+
+### Submitting a test job
+
+To send a BOSCO test job to a host (name as listed in
+the output of `bosco_cluster --list`):
+
+1.  Setup the environment appropriate for your shell as described in the
+    setup environment section (above).
+2.  For the cluster `hostname` (identical to the output of
+    `bosco_cluster --list`), run (replacing `hostname`):
+
+        bosco_cluster --test hostname
+
+Example:
+
+    $ bosco_cluster -t
+    Testing ssh to ...Passed!
+    Testing bosco submission...Passed!
+    Checking for submission to remote pbs cluster (could take ~30 seconds)...Passed!
+    Submission files for these jobs are in /home/dweitzel/bosco/local.localhocentos56/bosco-test
+    Execution on the remote cluster could take a while...Exiting
+
+
+### Configuring Executable
+
+The executable needs to be configured to take input from directories and
+write output to directories on the remote cluster. No output can be
+transferred back to the submit host automatically.
+
+In the examples below, the input files and executables are assumed to be
+in `input_directory`. The executable will write all output files and any
+other output that is needed to `output_directory`. The input_directory
+and output_directory could be the same directory, but for clarity, they
+are shown separately below.
+
+A common use case is to create a wrapper script around the actual
+executable. For example, a wrapper could be:
+
+    #!/bin/bash
+
+    # Change to input directory
+    cd $HOME/input_directory
+
+    # Run the actual application
+    ./real_exe
+
+    # Copy the output to the output directory
+    mkdir -p $HOME/output_directory
+    cp output_file.1 $HOME/output_directory/
+    cp output_file.2 $HOME/output_directory/
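+
+The wrapper must be executable on the cluster. Assuming (as in the submit
+file below) that it is saved as `start.sh` inside the input directory, you
+could, for example, set the permission before transferring it:
+
+``` screen
+chmod +x input_directory/start.sh
+```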
+
+### Transfer input
+
+In order to do useful work on the remote cluster, you will need to
+transfer all input files and executables. For example, you could use the
+command:
+
+    scp -r input_directory username@mycluster-submit.mydomain:
+
+
+The `input_directory` would include the executables and the input files.
+The executable will be explicitly listed in the condor submit file
+(immediately below). You will configure the executables to write to a
+known directory. This directory will later be manually transferred from
+the remote cluster back to the local machine.
+
+### Example Submission File
+
+Here is an example submission file. Copy it to a file, `example.condor`
+
+``` file
+universe = grid
+grid_resource = pbs username@mycluster-submit.mydomain
+executable = input_directory/start.sh
+output = /dev/null
+error = /dev/null
+log = logfile
++remote_iwd="~/"
+transfer_executable=false
+queue
+```
+
+**NOTE**: output and error are specified as `/dev/null` because there is
+no way to get the output (stdout) and error (stderr) from the executable
+back to the submit host automatically. Getting output back is covered
+after monitoring.
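+
+Since stdout and stderr cannot be transferred back, a common workaround is
+to redirect them inside the wrapper to files in the output directory; a
+minimal sketch (the file names are hypothetical):
+
+``` screen
+# inside the wrapper script: capture the application's stdout/stderr
+./real_exe > $HOME/output_directory/real_exe.out 2> $HOME/output_directory/real_exe.err
+```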
+
+There are only two lines that need to be edited.
+
+  - `grid_resource` needs to be changed to the cluster name.
+  - `executable` needs to be changed to an executable that is on the
+    cluster. It will need to be prestaged to the cluster.
+
+To submit to a Condor cluster you'll have to specify the resource in the
+submit file as:
+
+    grid_resource = batch condor username@mycluster-submit.mydomain
+
+
+### Job Submission
+
+Submit the job file `example.condor` with the `condor_submit` command:
+
+    condor_submit example.condor
+
+### Job Monitoring
+
+Monitor the job with `condor_q`. For example, the job when idle is:
+
+    condor_q
+
+    -- Submitter: : <10.148.2.154:44918> :
+     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
+      2.0    dweitzel        3/12  20:52   0+00:00:00 I  0   0.0  start.sh
+
+    1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
+
+The job could be idle if it is currently idle at the remote cluster.
+When the job is being executed on the remote cluster, the `ST` (State)
+will change to `R`, and the `RUN_TIME` will grow.
+
+Another method of monitoring a job is to check the job's `log`, a human
+readable (for the most part) time line of actions relating to the job.
+The `logfile` was specified in the submission script, for example
+`logfile` in the example above. You can view the log file by using
+`cat`:
+
+    cat logfile
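+
+As a rough illustration (the timestamps and the executing host below are
+made up; only the submit host and job id are taken from the example
+above), the log records time-stamped events such as submission, execution
+and termination:
+
+``` screen
+000 (002.000.000) 03/12 20:52:01 Job submitted from host: <10.148.2.154:44918>
+...
+001 (002.000.000) 03/12 20:58:14 Job executing on host: <128.135.158.176:9618>
+...
+005 (002.000.000) 03/12 21:10:02 Job terminated.
+...
+```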
+
+### Transfer output back
+
+Just like transferring the input directory to the cluster, you will need
+to transfer the output back. An example command that could be used
+is:
+
+    scp -r username@mycluster-submit.mydomain:output_directory .
+
+This will copy the contents of
+`output_directory` from
+the remote cluster to the local machine.
+
+## How to Stop
+
+To stop BOSCO:
+
+    bosco_stop
+
+To uninstall BOSCO:
+
+    bosco_uninstall
+
+## Command summary
+
+| **Action** | **Arguments** | **Implicit Input** | **Output** |
+|------------|---------------|--------------------|------------|
+| bosco_install | | | Success/Failure |
+| source bosco.\[csh,sh\] | | | |
+| bosco_start | | | Success/Failure |
+| bosco_stop | | | Success/Failure |
+| bosco_uninstall | | | Success/Failure |
+| bosco_cluster | --add user@host | | Success/Fail, entry in head node table |
+| | --list | Head-node table | List of added head nodes and their status |
+| | --test Hostname | Submit file | Status of submitted jobs |
+| | --remove Hostname | | Success/Fail, head node table with Hostname removed, delete if empty |
+| condor_* | Various | Various | Various, see the [Condor manual](http://research.cs.wisc.edu/condor/manual/) |
+
+Output data from the batch system must be transferred back manually.
+
+## Troubleshooting
+
+### Useful Configuration and Log Files
+
+Under the hood BOSCO uses Condor. You can find all the Condor log files
+in `~/bosco/local.HOSTNAME/log`.
+
+### Known Issues
+
+The current version does not support file transfer, not even stdout and
+stderr. If you specify them in the Condor submit file (as something
+different from /dev/null) your job will fail.
+
+## Get Help/Support
+
+To get assistance you can send an email to
+
+
+## References
+
+Campus Grids related documents:
+
+  - 
+  - 
+
+Condor documents:
+
+  - Condor manual: 
+
+How to submit Condor jobs:
+
+  - Quick start: 
+  - Tutorial:
+    
+  - Condor manual:
+    
+
+## Comments
+
+
diff --git a/docs/BoSCOv1.md b/docs/BoSCOv1.md
new file mode 100644
index 0000000..a3a811e
--- /dev/null
+++ b/docs/BoSCOv1.md
@@ -0,0 +1,583 @@
+%META:TOPICINFO{author="KyleGross" date="1481047997" format="1.1"
+version="1.29"}% \
This is an older +release. Please go to [BOSCO](Trash/CampusGrids.BoSCO) for the latest +release. \ + +# BOSCO + + + + +\---\# Introduction + +BOSCO is a job submission manager designed to help researchers manage +large numbers (\~1000s) of job submissions to the different resources +that they can access on a campus (initially a PBS cluster running +Linux). This is the first release of BOSCO, v1, if you find any problems +or need help installing or running BoSCO, please email + . + +It offers the following capabilities: + + - Jobs are automatically resubmitted when they fail. The researcher + does not need to babysit their jobs. + - Job submissions can be throttled to meet batch scheduler settings + (e.g. only 10 jobs running concurrently). The researcher does not + need to make multiple submissions. BOSCO handles that for them. + - BOSCO is designed to be flexible and allows jobs to be submitted to + multiple clusters, with different job schedulers (e.g. PBS, LSF, + Condor). + +The primary advantage for the researcher is that they only need to learn +one job scheduler environment even if the clusters utilize different +native environments. + +%TWISTY\_OPTS\_DETAILED% +showlink="Click to see the format conventions used in this document" + +Trash/DocumentationTeam.DocConventions +Trash/DocumentationTeam.DocConventions + + +\---\# Requirements + + - **Submit-node** + This is the system that the researcher uses to submit jobs. In + general it can be the user's laptop, workstation, or it can be + another system that the user logs into for submitting jobs to the + cluster. A current requirement is that the submit-node must use the + same Linux flavor that the PBS/LSF/Condor cluster is using. **There + can not be any Condor collector running on the submit node**, + otherwise it will conflict with the Campus Factory. + - **Cluster head-node** + This is the node that you normally login to on the PBS, LSF or + Condor cluster. + - **PBS flavors supportted** + Torque and PBSPro + - **Condor flavors supported** + Condor 7.6 or later + - **LSF flavors** + no special requirements + - **Cluster** + This is the remote cluster that jobs will execute on. The Cluster + head-node is a node belonging to this cluster. The cluster needs: + - **Shared Filesystem** + The Cluster needs a shared home filesystem + - **Network Access** + The worker nodes need to have access to the submit host. The + worker nodes can be behind a + [NAT](https://en.wikipedia.org/wiki/Network_address_translation) + between the worker nodes and the submit host. + +BOSCO can be used as part of a more complex Condor setup (with flocking +or multiple pools). Whatever is the setup: + + - the BOSCO host needs connectivity to the cluster submit nodes + - the worker nodes (running the jobs, e.g. the nodes in the PBS + cluster) must have network connectivity to the jobs submit node (the + BOSCO host or a different Condor schedd flocking into it) + +\---\# How to Install + +\---\#\# Installation Procedure + +1. Download BOSCO + - BOSCO comes in 3 different flavors depending on your Linux + version. To find your Linux distribution and version type `cat + /etc/*-release`. 
Right click on the appropriate link below + (depending on your linux version), select "save as" and save the + file in your home directory + - [RHEL 5 or Scientific Linux 5 or Centos 5 + - 64bit](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-1.0-x86_64_rhap_5.tar.gz) + - [RHEL 6 or Scientific Linux 6 or Centos 6 + - 64bit](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-1.0-x86_64_rhap_6.tar.gz) + - [Debian 6 + - 64bit](ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-1.0-x86_64_deb_6.tar.gz) + - Alternatively you can use `wget` and the correct link:\
+
+        # If you have RHEL 5 or Scientific Linux 5 or Centos 5
+        cd ~
+        wget ftp://ftp.cs.wisc.edu/condor/bosco/latest/bosco-1.0-x86_64_rhap_5.tar.gz
+
+1.  Unpack the tar file and install BOSCO:
+
+        cd ~
+        mkdir tmp-bosco
+        cd tmp-bosco
+        tar xvzf ~/bosco-1.0-x86_64_rhap_5.tar.gz
+        cd condor-7.9*
+        ./bosco_install
+        cd ~
+        rm -r tmp-bosco
+
+%TWISTY\_OPTS\_DETAILED%  \
 -bash-3.2$ mkdir tmp-bosco -bash-3.2$ cd tmp-bosco
+-bash-3.2$ wget
+
+--2012-07-12 14:00:00--
+
+=\> \`bosco-beta-x86\_64\_rhap\_5.7.tar.gz' Resolving ftp.cs.wisc.edu...
+128.105.2.28 Connecting to ftp.cs.wisc.edu|128.105.2.28|:21...
+connected. Logging in as anonymous ... Logged in\! ==\> SYST ... done.
+==\> PWD ... done. ==\> TYPE I ... done. ==\> CWD /condor/bosco/latest
+... done. ==\> SIZE bosco-beta-x86\_64\_rhap\_5.7.tar.gz ... 23975235
+==\> PASV ... done. ==\> RETR bosco-beta-x86\_64\_rhap\_5.7.tar.gz ...
+done. Length: 23975235 (23M)
+
+100%\[================================================================================================================================================================\>\]
+23,975,235 72.0M/s in 0.3s
+
+2012-07-12 14:00:01 (72.0 MB/s) -
+\`bosco-beta-x86\_64\_rhap\_5.7.tar.gz' saved \[23975235\]
+
+\-bash-3.2$ tar xzf bosco-beta-x86\_64\_rhap\_5.7.tar.gz -bash-3.2$ cd
+condor-7.9.1-51830-x86\_64\_rhap\_5.7-stripped/ -bash-3.2$
+./bosco\_install Installing Condor from
+/share/home/marco/tmp-bosco/condor-7.9.1-51830-x86\_64\_rhap\_5.7-stripped
+to /share/home/marco/bosco cp: cannot stat
+\`/home/marco/bosco/etc/condor/config.d/condor\_config.factory': No such
+file or directory
+
+Condor has been installed into: /share/home/marco/bosco
+
+Configured condor using these configuration files: global:
+/share/home/marco/bosco/etc/condor\_config local:
+/share/home/marco/bosco/local.uc3-c001/condor\_config.local Created a
+script you can source to setup your Condor environment variables. This
+command must be run each time you log in or may be placed in your login
+scripts: source /share/home/marco/bosco/bosco\_setenv
+
+\-bash-3.2$ source /share/home/marco/bosco/bosco\_setenv \
+
+
+If the installation fails, especially because of missing dependencies,
+please check that you downloaded the correct version for your Linux
+distribution and version.
+
+## How to Use
+
+Now BOSCO is installed. To use it:
+
+1.  Setup the environment
+2.  Add all the desired clusters (at least one)
+3.  Start BOSCO
+4.  Submit a test job
+5.  Submit a real job
+
+### Setup environment before using
+
+Since BOSCO is not installed in
+the system path, an environment file must be sourced every time you
+use BOSCO (start/stop/job submission or query, anything):
+
+``` screen
+$ source ~/bosco/bosco_setenv
+```
+
+### Starting BOSCO
+
+BOSCO has some persistent services that must be
+running. You'll have to start it at the beginning and probably after
+each reboot of your host. You should stop BOSCO before an upgrade and
+possibly before a shutdown of your host. If you will not use BOSCO
+anymore, uninstall will remove it from your system.
+
+To start BOSCO:
+
+    bosco_start
+
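+A quick way to check that the BOSCO services came up is to query the
+(initially empty) local job queue; if this command fails, BOSCO is
+probably not running (see the troubleshooting section below):
+
+``` screen
+condor_q
+```
+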
+### Add a cluster to BOSCO
+
+To add a new cluster to the resources
+you will be using through BOSCO:
+
+1.  Setup the environment appropriate for your shell as described in the
+    setup environment section (above).
+2.  For the cluster `mycluster` with user `username`, submit host
+    `mycluster-submit.mydomain.org` (Fully Qualified Domain Name, aka
+    full hostname including the domain name) and Local Resource
+    Management System `LRMS` (valid values are `pbs`, `condor` or
+    `lsf`), run (replacing the placeholders):
+
+        bosco_cluster --add username@mycluster-submit.mydomain.org LRMS
+
+Example:
+
+    -bash-3.2$ bosco_cluster -add itbv-ce-pbs.uchicago.edu
+    Enter password to copy ssh keys to itbv-ce-pbs.uchicago.edu:
+    The authenticity of host 'itbv-ce-pbs.uchicago.edu (128.135.158.176)' can't be established.
+    RSA key fingerprint is 8e:a6:db:18:80:6b:b7:de:56:c8:5a:a2:75:19:11:8d.
+    Are you sure you want to continue connecting (yes/no)? yes
+    Warning: Permanently added 'itbv-ce-pbs.uchicago.edu,128.135.158.176' (RSA) to the list of known hosts.
+    Installing BOSCO on itbv-ce-pbs.uchicago.edu...
+    Installation complete
+
+When you add your first cluster, BOSCO will prompt you for a password
+that will be used to store the SSH keys used by BOSCO to access all your
+clusters (`Enter password for bosco ssh key:`). Select a random string.
+It is preferable if you do not use the password you use to access the
+cluster or to unlock your SSH keys.
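+
+The generated key pair is kept in your `~/.ssh` directory (see "Errors
+due to leftover files" below); you can check that it was created with,
+for example:
+
+``` screen
+ls -l ~/.ssh/bosco_key.rsa*
+```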
+
+Then, if you don't have a ssh key agent with that cluster enabled, you
+will be prompted for the password that you use to access the cluster you
+are adding to BOSCO (`Enter password to copy ssh keys to ...`). This may
+be followed by a confirmation of the RSA key fingerprint, if it is your
+first ssh connection from this host, where you have to answer `yes`.
+
+ You must be able to login to
+the remote cluster. If password authentication is OK, the script will
+ask you for your password. If key only login is allowed, then you must
+load your key in the `ssh-agent`. Here is an example adding the key and
+testing the login:
+
+    ssh-agent
+    SSH_AUTH_SOCK=/tmp/ssh-WdHXb17102/agent.17102; export SSH_AUTH_SOCK;
+    SSH_AGENT_PID=17103; export SSH_AGENT_PID; echo Agent pid 17103;
+
+    SSH_AUTH_SOCK=/tmp/ssh-WdHXb17102/agent.17102; export SSH_AUTH_SOCK;
+    SSH_AGENT_PID=17103; export SSH_AGENT_PID;
+    ssh-add id_rsa_bosco
+    Enter passphrase for id_rsa_bosco:
+    Identity added: id_rsa_bosco id_rsa_bosco
+    ssh
+    Last login: Thu Sep 13 13:49:33 2012 from uc3-bosco.mwt2.org
+    $ logout
+
+When adding the cluster, if the last message is `Done!`, your cluster
+has been added successfully.
+
+You can see a list of the current clusters in BOSCO by typing:
+
+    bosco_cluster --list
+
+Example:
+
+    -bash-3.2$ bosco_cluster --list
+    itbv-ce-pbs.uchicago.edu
+
+
+### Submitting a test job
+
+You can send a simple test job to verify
+that the cluster added is working correctly.
+
+To send a BOSCO test job to a host (name as listed in
+the output of `bosco_cluster --list`):
+
+1.  Setup the environment appropriate for your shell as described in the
+    setup environment section (above).
+2.  For the cluster `hostname` (identical to the output of
+    `bosco_cluster --list`), run (replacing `hostname`):
+
+        bosco_cluster --test hostname
+
+Example:
+
+    $ bosco_cluster -t
+    Testing ssh to ...Passed!
+    Testing bosco submission...Passed!
+    Checking for submission to remote pbs cluster (could take ~30 seconds)...Passed!
+    Submission files for these jobs are in /home/dweitzel/bosco/local.localhocentos56/bosco-test
+    Execution on the remote cluster could take a while...Exiting
+
+
+### How to Stop and Remove
+
+To stop BOSCO:
+
+    bosco_stop
+
+To uninstall BOSCO:
+
+  - If you want to remove remote clusters get the list and remove them
+    one by one:
+
+        bosco_cluster --list
+
+        # For each remote cluster
+        bosco_cluster -r user@cluster_as_spelled_in_list
+
+  - Remove the installation directory:
+
+        bosco_uninstall
+
+ Uninstalling BOSCO removes the
+software but leaves the files in your `.bosco` and `.ssh` directories
+with all the information about the added clusters and the SSH keys.
+Files installed on the remote clusters are not removed either.
+
+### How to Update BOSCO
+
+If you want to update BOSCO to a new
+version you have to:
+
+1.  setup BOSCO:
+
+        source ~/bosco/bosco_setenv
+
+2.  stop BOSCO:
+
+        bosco_stop
+
+3.  remove the old BOSCO:
+
+        bosco_uninstall
+
+4.  download and install the new BOSCO (see install section above) and
+    re-add all the clusters in your setup:
+5.  for each installed cluster (the list is returned by `bosco_cluster
+    --list`):
+    1.  remove the cluster:
+
+            bosco_cluster --remove username@mycluster-submit.mydomain.org
+
+    2.  add the cluster:
+
+            bosco_cluster --add username@mycluster-submit.mydomain.org queue
+
+6.  start BOSCO:
+
+        bosco_start
+
+This will update the local installation and the software on the remote
+clusters
+
+## Job submission example
+
+You can submit a regular Condor vanilla job to BOSCO. The Campus
+Factory, a component within BOSCO, will take care of running it on the
+different clusters that you added and of transferring the input and output
+files as needed. Here is a simple example. The Condor team provides
+[many great tutorials](http://research.cs.wisc.edu/condor/tutorials/) to
+learn more.
+
+### Configuring Executable
+
+You may wrap your job in a script (e.g.
+using your favorite shell or Python) so that you can invoke more
+executables and do some tests.
+
+This is a simple bash script, called `myjob.sh`:
+
+    #!/bin/bash
+
+    # Prepare for the execution
+
+    # Run the actual applications
+    /bin/hostname
+    /bin/date
+    /usr/bin/id
+    /usr/bin/whoami
+    /bin/env | /bin/sort > myout-file-$1
+
+    # Final steps
+
+### Example Submission File
+
+Here is an example submission file.
+Copy it to a file, `example.condor`:
+
+``` file
+universe = vanilla
+output = cfjob.out.$(Cluster)-$(Process)
+error = cfjob.err.$(Cluster)-$(Process)
+Executable     = myjob.sh
+arguments = 10
+log = cfjob.log.$(Cluster)
+should_transfer_files = YES
+when_to_transfer_output = ON_EXIT
+queue 1
+```
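+
+The submit file above queues a single job. If you want to submit many
+independent copies of the same job (each getting its own `$(Process)`
+number in the output and error file names), you can, for example, raise
+the count on the last line:
+
+``` file
+# hypothetical: queue 20 jobs instead of 1; $(Process) will run from 0 to 19
+queue 20
+```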
+
+### Job Submission
+
+Submit the job file `example.condor` with the
+`condor_submit` command:
+
+    condor_submit example.condor
+
+### Job Monitoring
+
+Monitor the job with `condor_q`. For example,
+the job when idle is:
+
+    condor_q
+
+    -- Submitter: uc3-c001.mwt2.org : <10.1.3.101:45876> : uc3-c001.mwt2.org
+     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
+      12.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10
+      13.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10
+      14.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      15.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      16.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      17.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      18.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      19.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+
+    8 jobs; 0 completed, 0 removed, 8 idle, 0 running, 0 held, 0 suspended
+
+**NOTE**: condor_q will also show the glidein jobs, auxiliary jobs
+that BOSCO uses to run your actual jobs. In the example above, jobs
+12 and 13 (short2.sh) are the submitted user jobs.
+
+The job could be idle if it is currently idle at the remote cluster.
+When the job is being executed on the remote cluster, the `ST` (State)
+will change to `R`, and the `RUN_TIME` will grow.
+
+Another method of monitoring a job is to check the job's `log`, a human
+readable (for the most part) time line of actions relating to the job.
+The log file name was specified in the submission script, for example
+`cfjob.log.$(Cluster)` in the example above. You can view the log file
+by using `cat`:
+
+    cat cfjob.log.12
+
+### Job output
+
+Once the job completes BOSCO will transfer back
+standard output, standard error and the output files (if specified in
+the submit file), e.g. the job above will create stdout and stderr files
+(unique for each submission) and a file `myout-file-10` in the directory
+where the `condor_submit` command was executed.
+
+## Command summary
+
+| **Action** | **Arguments** | **Implicit Input** | **Output** |
+|------------|---------------|--------------------|------------|
+| bosco_install | | | Success/Failure |
+| source bosco.\[csh,sh\] | | | |
+| bosco_start | | | Success/Failure |
+| bosco_stop | | | Success/Failure |
+| bosco_uninstall | | | Success/Failure |
+| bosco_cluster | --add user@host queue | | Success/Fail, entry in head node table |
+| | --list | Head-node table | List of added head nodes and their status |
+| | --test Hostname | Submit file | Status of submitted jobs |
+| | --remove Hostname | | Success/Fail, head node table with Hostname removed, delete if empty |
+| condor_* | Various | Various | Various, see the [Condor manual](http://research.cs.wisc.edu/condor/manual/) |
+
+## Advanced use
+
+### Multi-homed hosts
+
+Multi-homed hosts are hosts with multiple
+Network Interfaces (aka dual-homed when they have 2 NICs). BOSCO
+configuration is tricky on multi-homed hosts. BOSCO requires the submit
+host to be able to connect back to the BOSCO host, so it must advertise
+an interface that is reachable from all the chosen submit hosts. E.g. a
+host with a NIC on a private network and one with a public IP address
+must advertise the public address if the submit hosts are outside of the
+private network. In order to do that you have to:
+
+  - make sure that the name returned by the command `/bin/hostname -f`
+    is the name resolving in the public address (e.g. `` host `hostname
+    -f` `` should return the public address). If not you should change
+    it.
+  - edit `~/bosco/local.$HOST/condor_config.local` (HOST
+    is the short host name) and add a line like `NETWORK_INTERFACE =
+    xxx.xxx.xxx.xxx`, substituting xxx.xxx.xxx.xxx with the public IP
+    address. This will tell BOSCO to use that address.
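+
+For example, if the public address of the BOSCO host were 198.51.100.20
+(a made-up address for illustration), the added line would be:
+
+``` file
+# hypothetical public IP address of the BOSCO host
+NETWORK_INTERFACE = 198.51.100.20
+```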
+
+\---\#\# Multi-User BOSCO For multi-user BOSCO, special care must be
+given to enable multiple users to use the same bosco condor instance.
+
+First, you must install the BOSCO instance just as before in the above
+instructions. Second, you install a system wide condor instance. Only
+the BOSCO instance will run, but it will run as root.
+
+We will assume that BOSCO was installed as user `bosco` for the sake of
+this guide.
+
+1.  As the bosco user, stop bosco:
+
+        bosco_stop
+
+1.  As the bosco user, edit
+    `$HOME/bosco/local.HOST/config/condor_config.factory`.
+    Modify the ENVIRONMENT to include bosco's home directory, and add
+    the USERID line:
+
+        CAMPUSFACTORY_ENVIRONMENT = "PYTHONPATH=$(LIBEXEC)/campus_factory/python-lib CAMPUSFACTORY_DIR=$(LIBEXEC)/campus_factory _campusfactory_GLIDEIN_DIRECTORY=$CAMPUSFACTORY_DIR/share/glidein_jobs HOME=/home/bosco"
+        CAMPUSFACTORY_USERID = bosco
+
+1.  As root, change ownership of the password file and start the bosco
+    instance of condor:
+
+        source ~bosco/bosco/bosco_setenv
+        chown root: `condor_config_val SEC_PASSWORD_FILE`
+        bosco_start
+
+1.  As root, install Condor into a global directory. For example, you
+    may install the Condor RPM. Do not start this instance of Condor.
+2.  Now, as any user, you may run the test job given in [Validation of
+    Success](#Validation_of_Success)
+
+## Troubleshooting
+
+### Useful Configuration and Log Files
+
+Under the hood BOSCO uses
+Condor. You can find all the Condor log files in
+`~/bosco/local.HOSTNAME/log`.
+
+### Known Issues
+
+The current version does not support Condor clusters. You can add them
+but jobs will fail.
+
+### Cannot find the BOSCO download file
+
+Sometimes the filename is
+changed without updating the link in this document (e.g. changing the
+RHAP version from 5.7 to 5.8). If the link is broken please open the
+repository  in a Web browser
+and find the exact link. Please also report to us the mismatch so that
+we can fix this document. Thank you.
+
+### Make sure that you can connect to the BOSCO host
+
+If you see errors like:
+
+    Installing BOSCO on ...
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+
+please try manually to ssh from the
+BOSCO host to the cluster submit node. The ability to connect is
+required in order to install BOSCO.
+
+### Make sure that BOSCO is running
+
+BOSCO may not survive after you
+log out. When you log back in after sourcing the setup (`source
+~/bosco/bosco_setenv`), you should start BOSCO (`bosco_start`), especially
+if the command `condor_q` is failing.
+
+### Errors due to leftover files
+
+Bosco files on the submit host are
+in:
+
+  - `~/bosco/` - the release directory
+  - `~/.bosco/` - some service files
+  - `~/.ssh/` - the ssh key used by BOSCO
+
+If you used `bosco_uninstall` it will remove all BOSCO related files. If
+you removed BOSCO by hand you must pay attention. If the service key is
+still in `.ssh` but the other files are missing, during the execution of
+BOSCO commands you will get some unclear errors like *"IOError: \[Errno
+2\] No such file or directory: '/home/marco/.bosco/.pass'"* , *"OSError:
+\[Errno 5\] Input/output error"* , all followed by:
+
+    Password-less ssh to marco@itb2.uchicago.edu did NOT work, even after adding the ssh-keys.
+    Does the remote resource allow password-less ssh?
+
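+Before removing anything, you can check by hand whether the remote
+resource still accepts the BOSCO key (the host name is taken from the
+error message above; you may be prompted for the key's passphrase):
+
+``` screen
+ssh -i ~/.ssh/bosco_key.rsa marco@itb2.uchicago.edu /bin/true
+```
+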
+If that happens you can remove the service files and the keys using:
+
+    rm -r ~/.bosco
+    rm ~/.ssh/bosco_key.rsa*
+
+and then re-add all the clusters with `bosco_cluster --add`.
+
+## Get Help/Support
+
+To get assistance you can send an email to
+
+
+## References
+
+Campus Grids related documents:
+
+  - 
+  - 
+
+Condor documents:
+
+  - Condor manual: 
+
+How to submit Condor jobs:
+
+  - Quick start: 
+  - Tutorial:
+    
+  - Condor manual:
+    
+
+Here you can check out older releases:
+
+  - [BOSCO version 0](BoSCOv0)
+  - [BOSCO version 1](BoSCOv1) (Release documented here)
+  - [BOSCO version 1.1](BoSCOv1p1) (Following release)
+
+# Comments
+
+
diff --git a/docs/BoSCOv1p1.md b/docs/BoSCOv1p1.md
new file mode 100644
index 0000000..dead4cd
--- /dev/null
+++ b/docs/BoSCOv1p1.md
@@ -0,0 +1,295 @@
+%META:TOPICINFO{author="KyleGross" date="1465334043" format="1.1"
+version="1.15"}% %META:TOPICPARENT{name="BoSCO"}%
+
+# BOSCO
+
+
+
+## Introduction
+
+BOSCO is a job submission manager designed to help researchers manage
+large numbers (\~1000s) of job submissions to the different resources
+that they can access on a campus (initially a PBS cluster running
+Linux). This is release 1.1 of BOSCO. If you find any problems or need
+help installing or running BOSCO, please email
+ . BOSCO 1.1 is now available as a
+release.
+
+It offers the following capabilities:
+
+  - Jobs are automatically resubmitted when they fail. The researcher
+    does not need to babysit their jobs.
+  - Job submissions can be throttled to meet batch scheduler settings
+    (e.g. only 10 jobs running concurrently). The researcher does not
+    need to make multiple submissions. BOSCO handles that for them.
+  - BOSCO is designed to be flexible and allows jobs to be submitted to
+    multiple clusters, with different job schedulers (e.g. PBS, LSF,
+    Condor).
+
+The primary advantage for the researcher is that they only need to learn
+one job scheduler environment even if the clusters utilize different
+native environments.
+
+%TWISTY\_OPTS\_DETAILED%
+showlink="Click to see the format conventions used in this document"
+
+Trash/DocumentationTeam.DocConventions
+Trash/DocumentationTeam.DocConventions
+
+
+## BOSCO versions and BOSCO documents
+
+The BOSCO submit node is the host where BOSCO is installed and where
+user/s login to submit jobs via BOSCO. The multiple clusters added to
+BOSCO (i.e. where the user/s can submit jobs via BOSCO) are referred to as
+BOSCO resources.
+
+This page explains how to use BOSCO. Before using BOSCO you, or someone
+for you, will have to install a version of BOSCO and add at least one
+BOSCO resource. Adding, testing and removing BOSCO resources is part of
+the BOSCO configuration.
+
+[BOSCO Single User](BoscoInstall) allows a researcher to install BOSCO
+in her/his (non-privileged) account, to configure it and to use it. To
+install and configure BOSCO Single-user read [BOSCO Single User
+Installation](BoscoInstall).
+
+[BOSCO Multi User](BoscoMultiUser) is installed, configured and started
+on a host by the system administrator (root) and is available to all the
+users on the host. To install and configure BOSCO Multi-user read [BOSCO
+Multi User Installation](BoscoMultiUser).
+
+Later in this document we'll assume that BOSCO has been already
+installed and configured correctly. For the installation or to change
+the configuration (e.g. to add or remove BOSCO resources) please check
+the other documents: [BOSCO Single User Installation](BoscoInstall) and
+[BOSCO Multi User Installation](BoscoMultiUser).
+
+## Requirements
+
+There are specific requirements for the BOSCO resources that are
+specified in the install documents.
+
+To use BOSCO you need a BOSCO submit host with BOSCO installed and
+configured correctly. All requirements for the BOSCO submit host and the
+BOSCO resources, as well as the requirements to include BOSCO in a more
+complex Condor setup, are described in the install documents.
+
+## How to Install
+
+Either you or a system administrator for you will have to install and
+setup BOSCO. The installation consists of downloading and installing the
+BOSCO software. The setup consists of managing which clusters are
+included in the BOSCO pool and will execute your jobs; it includes all
+the operations performed using the `bosco_cluster` command. The
+installation and setup are covered in two separate documents:
+
+  - to install and setup BOSCO so that it is used and configured from a
+    single user account please refer to BoscoInstall
+  - to install and setup BOSCO so that it is configured from a single
+    user account but it can be used by all the accounts on the host
+    please refer to BoscoMultiUser
+
+## How to Use
+
+In order to use BOSCO and submit a job:
+
+1.  BOSCO must be installed
+2.  BOSCO must be running (it must have been started)
+3.  At least one cluster must have been added to BOSCO
+4.  You must setup the environment
+
+### Starting/Stopping and Configuring BOSCO
+
+Each time I mention
+"you" in this section I refer either to you or to a system administrator
+that acts on your behalf, probably the same person that installed BOSCO.
+
+BOSCO has some persistent services that must be running. You'll have to
+start it at the beginning and probably after each reboot of your host.
+You should stop BOSCO before an upgrade and possibly before a shutdown
+of your host. If you will not use BOSCO anymore, uninstall will remove
+it from your system.
+
+You need to add to BOSCO all the clusters whose resources you would like
+to use. In order to run jobs you need at least one.
+
+Please refer to the BoscoInstall or BoscoMultiUser documents for
+operations including:
+
+  - starting BOSCO
+  - stopping BOSCO
+  - updating BOSCO
+  - uninstalling BOSCO
+  - adding one or more clusters to BOSCO
+
+### Setup environment before using
+
+Since BOSCO is not installed in
+the system path, an environment file must be sourced every time you
+use BOSCO (start/stop/job submission or query, anything):
+
+``` screen
+$ source ~/bosco/bosco_setenv
+```
+
+## Job submission example
+
+You can submit a regular Condor vanilla job to BOSCO. The Campus
+Factory, a component within BOSCO, will take care to run it on the
+different clusters that you added and to transfer the input and output
+files as needed. Here is a simple example. The Condor team provides
+[many great tutorials](http://research.cs.wisc.edu/condor/tutorials/) to
+learn more.
+
+### Configuring Executable
+
+You may wrap your job in a script (e.g.
+using your favorite shell or Python) so that you can invoke more
+executables and do some tests.
+
+This is a simple bash script, called `myjob.sh`:
+
+    #!/bin/bash
+
+    # Prepare for the execution
+
+    # Run the actual applications
+    hostname
+    date
+    id
+    whoami
+
+    # Final steps
+
+### Example Submission File
+
+With BOSCO you can do direct submission
+to the cluster, using the grid universe, or use the glideins so that
+regular (vanilla) HTCondor jobs can be used. The small difference
+between the 2 options is in the submit file (see below), and vanilla jobs have
+some additional [Firewall and Network
+requirements](BoscoInstall#FirewallReq) because they use glideins. All
+the other steps, job file creation, job submission and checking the
+jobs, are the same.
+
+**Use one or the other**
+
+#### Direct Job submission example
+
+Here is an example submission file for direct submission. Copy it to a
+file, `example.condor`
+
+``` file
+universe = grid
+grid_resource = batch pbs marco@uc3-pbs.uchicago.edu
+Executable     = myjob.sh
+arguments =
+output = myjob.out
+error = myjob.error
+log = myjob.log
+transfer_output_files = 
+should_transfer_files = YES
+when_to_transfer_output = ON_EXIT
+queue 1
+```
+
+The type of cluster that you are submitting to, pbs, lsf, sge, or
+condor, must be supplied on the grid\_resource line.
+
+#### Glidein Job submission example
+
+You can submit a regular HTCondor vanilla job to BOSCO. The Campus
+Factory, a component within BOSCO, will take care to run it on the
+different clusters that you added and to transfer the input and output
+files as needed. Here is a simple example. The Condor team provides
+[many great tutorials](http://research.cs.wisc.edu/condor/tutorials/) to
+learn more.
+
+Here is an example of a vanilla submission file (using glideins). Copy
+it to a file, `example.condor`
+
+``` file
+universe = vanilla
+Executable     = myjob.sh
+arguments = $(Cluster) $(Process)
+output = cfjob.out.$(Cluster)-$(Process)
+error = cfjob.err.$(Cluster)-$(Process)
+log = cfjob.log.$(Cluster)
+should_transfer_files = YES
+when_to_transfer_output = ON_EXIT
+queue 1
+```
+
+ The BOSCO submit host needs to
+satisfy these additional [Firewall and Network
+requirements](BoscoInstall#FirewallReq) to be able to submit and run
+vanilla jobs. Those requirements include being reachable by all BOSCO
+resources.
+
+### Job Submission
+
+Submit the job file `example.condor` with the
+`condor_submit` command:
+
+    condor_submit example.condor
+
+### Job Monitoring
+
+Monitor the job with `condor_q`. For example,
+the job when idle is:
+
+    condor_q
+
+    -- Submitter: uc3-c001.mwt2.org : <10.1.3.101:45876> : uc3-c001.mwt2.org
+     ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
+      12.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10
+      13.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10
+      14.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      15.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      16.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      17.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      18.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+      19.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
+
+    8 jobs; 0 completed, 0 removed, 8 idle, 0 running, 0 held, 0 suspended
+
+**NOTE**: condor_q will also show the glidein jobs, auxiliary jobs
+that BOSCO uses to run your actual jobs. In the example above, jobs
+12 and 13 (short2.sh) are the submitted user jobs.
+
+The job could be idle if it is currently idle at the remote cluster.
+When the job is being executed on the remote cluster, the `ST` (State)
+will change to `R`, and the `RUN_TIME` will grow.
+
+Another method of monitoring a job is to check the job's `log`, a human
+readable (for the most part) time line of actions relating to the job.
+The log file name was specified in the submission script, for example
+`myjob.log` or `cfjob.log.$(Cluster)` in the examples above. You can view
+the log file by using `cat`:
+
+    cat myjob.log
+
+### Job output
+
+Once the job completes BOSCO will transfer back
+standard output, standard error and the output files (if specified in
+the submit file), e.g. the job above will create stdout and stderr files
+(unique for each submission) and a file `myout-file-10` in the directory
+where the `condor_submit` command was executed.
+
+
+## Troubleshooting
+
+### Useful Configuration and Log Files
+
+Under the hood BOSCO uses
+Condor. You can find all the Condor log files in
+`~/bosco/local.HOSTNAME/log`.
+
+### Known Issues
+
+### Make sure that BOSCO is running
+
+BOSCO may not survive after you
+log out. When you log back in after sourcing the setup (`source
+~/bosco/bosco_setenv`), if you are using BOSCO single-user you should
+start BOSCO (`bosco_start`), especially if the command `condor_q` is
+failing. More details about starting BOSCO are in BoscoInstall and
+BoscoMultiUser.
+
+## Get Help/Support
+
+To get assistance you can send an email to
+
+
+BoscoInstall
+
+# Comments
+
+
+
+%META:FILEATTACHMENT{name="bosco-submit\_and\_resource.jpg"
+attachment="bosco-submit\_and\_resource.jpg" attr="" comment=""
+date="1352242706" path="bosco-submit\_and\_resource.jpg" size="36728"
+stream="bosco-submit\_and\_resource.jpg"
+tmpFilename="/usr/tmp/CGItemp12909" user="MarcoMambelli" version="1"}%
diff --git a/docs/BoSCOv1p2.md b/docs/BoSCOv1p2.md
new file mode 100644
index 0000000..8bd9fa7
--- /dev/null
+++ b/docs/BoSCOv1p2.md
@@ -0,0 +1,18 @@
+%META:TOPICINFO{author="MarcoMambelli" date="1370361394" format="1.1"
+version="1.2"}% %META:TOPICPARENT{name="BoSCO"}%
+
+# BoSCO v1.2 has been promoted to our current release, this page has been moved to the [main BoSCO page](BoSCO)
+
+## Release Notes
+
+  - [Bosco 1.2 Release Notes](CampusGrids.Bosco1p2ReleaseNotes)
+
+## Installation Documents
+
+  - [Bosco Single User Install](BoscoInstall)
+  - [Bosco Multi-User Install](BoscoMultiUser)
+  - [Bosco Quick Start Guide](BoscoQuickStart)
+
+## Configuration Options
+
+  - [Bosco Glidein Configuration](BoscoGlideinConf)
diff --git a/docs/BoscoInstall.md b/docs/BoscoInstall.md
index 31488e6..31cf9f2 100644
--- a/docs/BoscoInstall.md
+++ b/docs/BoscoInstall.md
@@ -27,12 +27,11 @@ native environments.
 
 \* SLURM support is via its PBS emulation
 
-
 This document explains how to install, configure and use BOSCO for a
 single user. We recommend to use the [Bosco Quick Start](BoscoQuickStart) guide (less
 flexible but easier and guided setup), if you plan to install BOSCO only
 for you (single user) and to connect it to only one cluster.
-[Bosco Quick Start](BoscoQuickStart) will give you a full installation but to learn how to
+[Bosco Quick Start](BoscoQuickStart.md) will give you a full installation but to learn how to
 connect to multiple resources you have to read this document, it is not
 explained in the quick start guide.
 
@@ -146,13 +145,14 @@ BOSCO submit host is required.
 ## How to Use
 
 Now BOSCO is installed. To use it:
-   1. Setup the environment
-   1. Add all the desired clusters (at least one)
-   1. Start BOSCO
-   1. Submit a test job
-   1. Ready to submit a real job
 
-### Setup environment before using
+1. Setup the environment
+1. Add all the desired clusters (at least one)
+1. Start BOSCO
+1. Submit a test job
+1. Ready to submit a real job
+
+### Setup environment before using
 
 Since BOSCO is not installed in the system path. An environment file must be sourced all the times you use BOSCO (start/stop/job submission or query, anything):
 
@@ -173,7 +173,7 @@ To start BOSCO:
 
 To add a new cluster to the resources you will be using through BOSCO:
 
-1. Setup the environment appropriate for your shell as described in the [setup environment section](#SetupEnvironment) (above).
+1. Setup the environment appropriate for your shell as described in the [Setup environment before using](#setup-environment-before-using) (above).
    
 1. For the cluster `mycluster` with user `username` and submit host `mycluster-submit.mydomain.org` (Fully Qualified Domain Name, aka full hostname including the domain name) and Local Resource Management System `LRMS` (valid values are *pbs*, *condor*, *sge* or *lsf*). Replace the parts in red: 
 
@@ -280,6 +280,7 @@ To stop BOSCO:
 
 
 To uninstall BOSCO:
+
 1. If you want to remove remote clusters get the list and remove them one by one:
 
         $ bosco_cluster --list
@@ -295,122 +296,144 @@ To uninstall BOSCO:
 !!! note
     Uninstalling BOSCO removes the software but leaves the files in your =.bosco= and =.ssh= directories with all the information about the added clusters and the SSH keys. Files installed on the remote clusters are not removed either. 
 
----## How to Update BOSCO
+### How to Update BOSCO
 If you want to update BOSCO to a new version you have to:
-   1. setup BOSCO:
%UCL_PROMPT% source ~/bosco/bosco_setenv
- 1. stop BOSCO:
%UCL_PROMPT% bosco_stop
- 1. remove the old BOSCO:
%UCL_PROMPT% bosco_uninstall
+ + 1. setup BOSCO: + + $ source ~/bosco/bosco_setenv + + 1. stop BOSCO: + + $ bosco_stop + + 1. remove the old BOSCO: + + $ bosco_uninstall + 1. download and install the new BOSCO (see install section above) and re-add all the clusters in your setup: - 1. for each installed cluster (the list is returned by =bosco_cluster --list=): - 1. remove the cluster:
%UCL_PROMPT% bosco_cluster --remove %RED%username@mycluster-submit.mydomain.org%ENDCOLOR% 
- 1. add the cluster:
%UCL_PROMPT% bosco_cluster --add %RED%username@mycluster-submit.mydomain.org queue%ENDCOLOR% 
- 1. start BOSCO:
%UCL_PROMPT% bosco_start
+ 1. for each installed cluster (the list is returned by *bosco_cluster --list*): + 1. remove the cluster: + + $ bosco_cluster --remove username@mycluster-submit.mydomain.org + + 1. add the cluster: + + $ bosco_cluster --add username@mycluster-submit.mydomain.org queue + + 1. start BOSCO: + + $ bosco_start This will update the local installation and the software on the remote clusters -%ENDSECTION{"BoscoSetup"}% -%STARTSECTION{"BoscoJob"}% ----# Job submission example +### Job submission example You can submit a regular Condor vanilla job to BOSCO. The Campus Factory, a component within BOSCO, will take care to run it on the different clusters that you added and to transfer the input and output files as needed. -Here is a simple example. The Condor team provides [[http://research.cs.wisc.edu/condor/tutorials/][many great tutorials]] to learn more. - - ----## Configuring Executable -Your may wrap your job in a script (e.g. using your favorite shell or Python) so that you can invoke more executables and do some tests. +Here is a simple example. The Condor team provides [many great tutorials](http://research.cs.wisc.edu/condor/tutorials/) to learn more. -This is a simple bash script, called =myjob.sh=:
-#!/bin/bash
+### Configuring Executable
 
-# Prepare for the execution
+You may wrap your job in a script (e.g. using your favorite shell or Python) so that you can invoke more executables and do some tests.
 
-# Run the actual applications
-hostname 
-date 
-id 
-whoami 
+This is a simple bash script, called *myjob.sh*:
 
-# Final steps
+    #!/bin/bash
+    # Prepare for the execution
+    # Run the actual applications
+    hostname 
+    date 
+    id 
+    whoami 
 
-
+## Final steps +### Example Submission File ----## Example Submission File With BOSCO you can do direct submission to the cluster, using the grid universe, or use the the glideins so that regular (vanilla) HTCondor jobs can be used. -There is a small difference between the 2 options is in the submit file (see below) and vanilla have some additional [[BoscoInstall#FirewallReq][Firewall and Network requirements]] because they use glideins. +There is a small difference between the 2 options is in the submit file (see below) and vanilla have some additional +[Firewall and Network requirements](#networking) because they use glideins. All the other steps, job file creation, job submission and checking the jobs, are the same. *Use one or the other* ----### Direct Job submission example - -Here is an example submission file for direct submission. Copy it to a file, =example.condor= -
-universe = grid
-grid_resource = batch %RED%pbs%ENDCOLOR% marco@uc3-pbs.uchicago.edu
-Executable     = myjob.sh
-arguments = 
-output = myjob.out
-error = myjob.error
-log = myjob.log
-transfer_output_files = 
-should_transfer_files = YES
-when_to_transfer_output = ON_EXIT
-queue 1
-
+### Direct Job submission example + +Here is an example submission file for direct submission. Copy it to a file, *example.condor*: + + universe = grid + grid_resource = batch %RED%pbs%ENDCOLOR% marco@uc3-pbs.uchicago.edu + Executable = myjob.sh + arguments = + output = myjob.out + error = myjob.error + log = myjob.log + transfer_output_files = + should_transfer_files = YES + when_to_transfer_output = ON_EXIT + queue 1 The type of cluster that you are submitting to, pbs, lsf, sge, or condor, must be supplied on the grid_resource line. ----## Job Submission -Submit the job file =example.condor= with the =condor_submit= command:
-%UCL_PROMPT% condor_submit example.condor
-
+### Job Submission
+
+Submit the job file *example.condor* with the *condor_submit* command:
----## Job Monitoring
-Monitor the job with =condor_q=. For example, the job when idle is:
-%UCL_PROMPT% condor_q
-
--- Submitter: uc3-c001.mwt2.org : <10.1.3.101:45876> : uc3-c001.mwt2.org
- ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
-  12.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10      
-  13.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10      
-  14.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-  15.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-  16.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-  17.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-  18.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-  19.0   marco           7/6  13:46   0+00:00:00 I  0   0.0  glidein_wrapper.sh
-
-8 jobs; 0 completed, 0 removed, 8 idle, 0 running, 0 held, 0 suspended
-
+ $ condor_submit example.condor -*NOTE* That condor_q will show also the glidein jobs. Auxiliary jobs that BOSCO is using to run your actual job. Like in the example above, job 11 was the one submitted. +### Job Monitoring -The job could be idle if it is currently idle at the remote cluster. When the job is being executed on the remote cluster, the =ST= (State) will change to =R=, and the =RUN_TIME= will grow. +Monitor the job with *condor_q*. For example, the job when idle is: + + $ condor_q + -- Submitter: uc3-c001.mwt2.org : <10.1.3.101:45876> : uc3-c001.mwt2.org + ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD + 12.0 marco 7/6 13:45 0+00:00:00 I 0 0.0 short2.sh 10 + 13.0 marco 7/6 13:45 0+00:00:00 I 0 0.0 short2.sh 10 + 14.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + 15.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + 16.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + 17.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + 18.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + 19.0 marco 7/6 13:46 0+00:00:00 I 0 0.0 glidein_wrapper.sh + + 8 jobs; 0 completed, 0 removed, 8 idle, 0 running, 0 held, 0 suspended + + +!!! note + That condor\_q will show also the glidein jobs. Auxiliary jobs that BOSCO is using to run your actual job. Like in the example above, job 11 was the one submitted. -Another method of monitoring a job is to check the job's =log=, a human readable (for the most part) time line of actions relating to the job. The =logfile= was specified in the submission script, for example =logfile= in the example above. You can view the log file by using =cat=:
-%UCL_PROMPT% cat logfile
-
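+If a submission was a mistake or a job appears stuck, you can remove it with *condor_rm*, one of the standard HTCondor commands summarized later in this document. A quick sketch using the IDs from the queue listing above:
+
+    # remove a single job by its ID
+    $ condor_rm 12.0
+    # remove every job owned by a given user
+    $ condor_rm marco
+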
----## Job output -Once the job completes BOSCO will transfer back standard output, standard error and the output files (if specified in the submit file), e.g. the job above will create stdout and stderr files (unique for each submission) and a file =myout-file-10= in the directory where the =condor_submit= command was executed. -%ENDSECTION{"BoscoJob"}% +The job could be idle if it is currently idle at the remote cluster. When the job is being executed on the remote cluster, the *ST* (State) will change to *R*, and the *RUN_TIME* will grow. + +Another method of monitoring a job is to check the job's *log*, a human readable (for the most part) time line of actions relating to the job. The *logfile* was specified in the submission script, for example *logfile* in the example above. You can view the log file by using *cat*: + + $ cat logfile + +### Job output +Once the job completes BOSCO will transfer back standard output, standard error and the output files (if specified in the submit file), e.g. the job above will create stdout and stderr files (unique for each submission) and a file *myout-file-10* in the directory where the *condor_submit* command was executed. %STARTSECTION{"BoscoCommands"}% ----# Command summary +### Command summary + User commands: + | *Action* | *Arguments* | *Implicit Input* | *Output* | -| condor_* | Various | Various | Various see the [[http://research.cs.wisc.edu/condor/manual/][Condor manual]] | +|-----------|---------------|------------------|----------| +| condor_* | Various | Various | Various see the [Condor manual](http://research.cs.wisc.edu/condor/manual/)| + +There are many Condor commands. The most common user commands are *condor_q*, *condor_submit*, *condor_rm* and *condor_status* . -There are many Condor commands. The most common user commands are =condor_q=, =condor_submit=, =condor_rm= and =condor_status= . Administration commands: + | *Action* | *Arguments* | *Implicit Input* | *Output* | +------------|---------------|------------------|----------| | bosco_install | | | Success/Failure | -| source bosco.[csh,sh] | | | | +| source bosco.\[csh,sh\] | | | | | bosco_start | | | Success/Failure | | bosco_stop | | | Success/Failure | | bosco_uninstall | | | Success/Failure | @@ -418,315 +441,365 @@ Administration commands: | | --list | Head-node table | List of add head nodes and their status | | | --test Hostname | Submit file | Status of submitted jobs | | | --remove Hostname | | Success/Fail, head node table with Hostname removed, delete if empty | -| condor_* | Various | Various | Various see the [[http://research.cs.wisc.edu/condor/manual/][Condor manual]] | +| condor_* | Various | Various | Various see the [Condor manual](http://research.cs.wisc.edu/condor/manual/)| | Manually transfer output data from batch system. -%ENDSECTION{"BoscoCommands"}% + %STARTSECTION{"BoscoMultiCluster"}% ----# Multi-Cluster Bosco +### Multi-Cluster Bosco In order to use Multi-Cluster Bosco, you must make 1 configuration change. The multi-cluster also requires a public IP address. ----## Changing the Bosco Configuration for Multi-Cluster -BOSCO by default is using the loopback IP address. You must change the configuration to listen on the public interface. You can do this by editing the configuration file =$HOME/bosco/local.bosco/config/condor_config.factory=, adding anywhere the line:
-NETWORK_INTERFACE = 
-
+#### Changing the Bosco Configuration for Multi-Cluster
+
+By default, BOSCO uses the loopback IP address. You must change the configuration to listen on the public interface. You can do this by editing the configuration file *$HOME/bosco/local.bosco/config/condor_config.factory* and adding, anywhere in the file, the line:
+
+    NETWORK_INTERFACE =
+
+By setting this, you enable Bosco's smart interface detection, which will automatically choose and listen on the public interface.
----## Glidein Job submission example
+#### Glidein Job submission example
+
+You can submit a regular HTCondor vanilla job to BOSCO. The Campus Factory, a component within BOSCO, will take care of running it on the different clusters that you added and of transferring the input and output files as needed.
-Here follow a simple example. The Condor team provides [[http://research.cs.wisc.edu/condor/tutorials/][many great tutorials]] to learn more.
-
-Here is an example of a vanilla submission file (using glideins). Copy it to a file, =example.condor=
-
-universe = vanilla
-Executable     = myjob.sh
-arguments = $(Cluster) $(Process)
-output = cfjob.out.$(Cluster)-$(Process)
-error = cfjob.err.$(Cluster)-$(Process)
-log = cfjob.log.$(Cluster)
-should_transfer_files = YES
-when_to_transfer_output = ON_EXIT
-queue 1
-
+Here follow a simple example. The Condor team provides [many great tutorials](http://research.cs.wisc.edu/condor/tutorials/) to learn more. -%NOTE% The BOSCO submit host needs to satisfy these additional [[BoscoInstall#FirewallReq][Firewall and Network requirements]] to be able to submit and run vanilla jobs. Those requirement include being reachable by all BOSCO resources. +Here is an example of a vanilla submission file (using glideins). Copy it to a file, *example.condor* + + universe = vanilla + Executable = myjob.sh + arguments = $(Cluster) $(Process) + output = cfjob.out.$(Cluster)-$(Process) + error = cfjob.err.$(Cluster)-$(Process) + log = cfjob.log.$(Cluster) + should_transfer_files = YES + when_to_transfer_output = ON_EXIT + queue 1 + +!!! note + The BOSCO submit host needs to satisfy these additional [Firewall and Network requirements](#networking) to be able to submit and run vanilla jobs. Those requirement include being reachable by all BOSCO resources. %STARTSECTION{"BoscoAdvancedUse"}% ----# Advanced use +### Advanced use %STARTSECTION{"BoscoAdvancedUseInstContent"}% ----## Changing the BOSCO port -BOSCO is using the HTCondor [[http://research.cs.wisc.edu/htcondor/manual/latest/3_7Networking_includes.html#SECTION00472000000000000000][Shared port daemon]]. This means that all the communication are coming to the same port, by default 11000. If that port is taken (already bound), the [[BoscoQuickStart][quick start installer]] will select the first available port. You can check and edit manually the port used by BOSCO in the file =$HOME/bosco/local.bosco/config/condor_config.factory=. You can change the port passed to the shared port daemon (in %RED%red%ENDCOLOR%):
# Enabled Shared Port
-USE_SHARED_PORT = True
-SHARED_PORT_ARGS = -p %RED%11000%ENDCOLOR%
-
-%NOTE% You need to restart BOSCO after you change the configuration (=bosco_stop; bosco_start=). +#### Changing the BOSCO port + +BOSCO is using the HTCondor [Shared port daemon](https://htcondor.readthedocs.io/en/latest/admin-manual/networking.html#reducing-port-usage-with-the-condor-shared-port-daemon). +This means that all the communication are coming to the same port, by default 11000. If that port is taken (already bound), the [quick start installer](BoscoQuickStart) +will select the first available port. You can check and edit manually the port used by BOSCO in the file *$HOME/bosco/local.bosco/config/condor_config.factory*. +You can change the port passed to the shared port daemon: -If you are referring to this BOSCO pool (e.g. for flocking) you'll need to use a string like: =%RED%your_host.domain%ENDCOLOR%:%RED%11000%ENDCOLOR%?sock=collector= . + # Enabled Shared Port + USE_SHARED_PORT = True + SHARED_PORT_ARGS = -p 11000 + +!!! note + You need to restart BOSCO after you change the configuration (*bosco_stop; bosco_start*). + +If you are referring to this BOSCO pool (e.g. for flocking) you'll need to use a string like: *your_host.domain:11000?sock=collector* . Replace host and port with the correct ones. ----## Multi homed hosts +#### Multi homed hosts + Multi homed hosts are hosts with multiple Network Interfaces (aka dual-homed when they have 2 NICs). BOSCO configuration is tricky on multi-homed hosts. BOSCO requires the submit host to be able to connect back to the BOSCO host, so it must advertise an interface that is reachable from all the chosen submit hosts. E.g. a host with a NIC on a private network and one with a public IP address must advertise the public address if the submit hosts are outside of the private network. In order to do that you have to: - * make sure that the name returned by the command =/bin/hostname -f= is the name resolving in the public address (e.g. =host `hostname -f`= should return the public address). If not you should change it. - * edit =~/bosco/local.%RED%$HOST%ENDCOLOR%/condor_config.local= (HOST is the short host name) and add a line like =NETWORK_INTERFACE = xxx.xxx.xxx.xxx= , substituting xxx.xxx.xxx.xxx with the public IP address. This will tell BOSCO to use that address. + - make sure that the name returned by the command `/bin/hostname -f` is the name resolving in the public address (e.g. `host hostname -f`= should return the public address). If not you should change it. + + - edit `~/bosco/local./condor_config.local` (`` is the short host name) and add a line like `NETWORK_INTERFACE = xxx.xxx.xxx.xxx` , +substituting `xxx.xxx.xxx.xxx` with the public IP address. This will tell BOSCO to use that address. ----## Modifying maximum number of submitted jobs to a resource + +#### Modifying maximum number of submitted jobs to a resource Many clusters limit the number of jobs that can be submitted to the scheduler. For PBS, we are able to detect this limit. For SGE and LSF, we are not able to detect this limit. In the cases where we cannot find the limit, we set the maximum number of jobs very conservatively, to a maximum of 10. This includes both the number of idle and running jobs to the cluster. -The limit is specified in the condor config file =~/bosco/local.bosco/condor_config.local=, at the bottom. Edit the value of the configuration variable =GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE= +The limit is specified in the condor config file `~/bosco/local.bosco/condor_config.local`, at the bottom. 
Edit the value of the configuration variable `GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE` -
-GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE = %RED%10%ENDCOLOR%
-
+ GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE = 10 ----## Custom submit properties -Bosco has the ability to add custom submit properties to every job submitted to a cluster. On the cluster's login node (the BOSCO resource, the host you used at the end of the line when typing the =bosco_cluster --add= command), create the file +#### Custom submit properties +Bosco has the ability to add custom submit properties to every job submitted to a cluster. +On the cluster's login node (the BOSCO resource, the host you used at the end of the line when typing the `bosco_cluster --add` command), +create the file in one of the custom locations: -#CustomScriptLocations - * *PBS/SLURM* - =~/bosco/glite/bin/pbs_local_submit_attributes.sh= - * *Condor* - =~/bosco/glite/bin/condor_local_submit_attributes.sh= - * *SGE* (and other GE) - =~/bosco/glite/bin/sge_local_submit_attributes.sh= - * *LSF* - =~/bosco/glite/bin/lsf_local_submit_attributes.sh= +##### Custom Script Locations -%IMPORTANT% This file is executed and the output is inserted into the submit script. I.e. It is not cat, use echo/cat statements in the script. + - *PBS/SLURM* - `~/bosco/glite/bin/pbs_local_submit_attributes.sh` + - *Condor* - `~/bosco/glite/bin/condor_local_submit_attributes.sh` + - *SGE* (and other GE) - `~/bosco/glite/bin/sge_local_submit_attributes.sh` + - *LSF* - `~/bosco/glite/bin/lsf_local_submit_attributes.sh` -Below is an example =pbs_local_submit_attributes.sh= script which will cause every job submitted to this cluster through Bosco to request 1 node with 8 cores: -
-#!/bin/sh
+!!! note
+    This file is executed and its output is inserted into the submit script; the file itself is not copied in verbatim. Use echo/cat statements inside the script to emit the lines you want added.
 
-echo "#PBS -l nodes=1:ppn=8"
-
+Below is an example `pbs_local_submit_attributes.sh` script which will cause every job submitted to this cluster through Bosco to request 1 node with 8 cores: + + #!/bin/sh + echo "#PBS -l nodes=1:ppn=8" + +##### Passing parameters to the custom submit properties. ----### Passing parameters to the custom submit properties. You may also pass parameters to the custom scripts by adding a special parameter to the Bosco submit script. +For example, in your Bosco submit script, add: -For example, in your Bosco submit script, add:
-...
-%RED%+remote_cerequirements = NumJobs == 100%ENDCOLOR%
-...
-queue
-
+ ... + +remote_cerequirements = NumJobs == 100 + ... + queue -After you submit this job to Bosco, it will execute the [[#CustomScriptLocations][custom scripts]] with, in this example, =NumJobs= set in the environment equal to =100=. The custom script can take advantage of these values. For example, a PBS script can use the !NumJobs:
-#!/bin/sh
+After you submit this job to Bosco, it will execute the [custom scripts](#custom-script-locations) with,
+in this example, *NumJobs* set to *100* in the environment.
+The custom script can take advantage of these values. For example, a PBS script can use *$NumJobs*:
 
-echo "#PBS -l select=$NumJobs"
-
+ #!/bin/sh + echo "#PBS -l select=$NumJobs" + +This will set the number of requested cores from PBS to *$NumJobs* specified in the original Bosco Submit file. -This will set the number of requested cores from PBS to !NumJobs specified in the original Bosco Submit file. +##### Flocking to a BOSCO installation ----## Flocking to a BOSCO installation In some special cases you may desire to flock to your BOSCO installation. If you don't know what I'm talking about, then skip this section. In order to enable flocking you must use an IP so that all the hosts you are flocking from can communicate with the BOSCO host. -Then you must setup FLOCK_FROM and the security configuration so that the communications are authorized. +Then you must setup `FLOCK_FROM` and the security configuration so that the communications are authorized. BOSCO has strong security settings. Here are two examples: - 1 Using GSI authentication (a strong authentication method) you must provide and install X509 certificates, you must change the configuration %TWISTY{%TWISTY_OPTS_DETAILED% showlink="Click to see the configuration file" }%
#
-# Networking - If you did not already, remember that you need to set BOSCO not to use the loopback port
-#
-NETWORK_INTERFACE =
-
-#
-# Hosts definition
-#
-# BOSCO host
-H_BOSCO = %RED%bosco.mydomain.edu%ENDCOLOR%
-H_BOSCO_DN = %RED%/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=bosco.mydomain.edu%ENDCOLOR%
-
-# submit host (flocking to BOSCO host)
-H_SUB = %RED%sub.mydomain.edu%ENDCOLOR%
-H_SUB_DN = %RED%/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=sub.mydomain.edu%ENDCOLOR%
-
-#
-# Flocking configuration
-# 
-FLOCK_FROM = $(FLOCK_FROM) $(H_SUB)
-
-#
-# Security definitions
-# 
-# Assuming system-wide installed CA certificates 
-GSI_DAEMON_DIRECTORY = /etc/grid-security
-# This host's certificates
-GSI_DAEMON_CERT = /etc/grid-security/hostcert.pem
-GSI_DAEMON_KEY = /etc/grid-security/hostkey.pem
-# default GSI_DAEMON_TRUSTED_CA_DIR = $(GSI_DAEMON_DIRECTORY)/certificates
-CERTIFICATE_MAPFILE= $HOME/bosco/local.bosco/certs/condor_mapfile
-
-# Not used
-MY_DN = $(H_BOSCO_DN)
-
-# Who to trust?  Include the submitters flocking here
-GSI_DAEMON_NAME = $(GSI_DAEMON_NAME), $(H_BOSCO_DN), $(H_SUB_DN)
-
-# Enable authentication from the Negotiator
-SEC_ENABLE_MATCH_PASSWORD_AUTHENTICATION = TRUE
-
-# Enable gsi authentication, and claimtobe (for campus factories)
-# The default (unix) should be: FS, KERBEROS, GSI
-SEC_DEFAULT_AUTHENTICATION_METHODS = FS,GSI, PASSWORD, $(SEC_DEFAULT_AUTHENTICATION_METHODS)
-SEC_CLIENT_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE
-SEC_DAEMON_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE
-SEC_WRITE_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE
-SEC_ADVERTISE_SCHEDD_METHODS = FS, PASSWORD, GSI, CLAIMTOBE
-
-ALLOW_DAEMON = $(ALLOW_DAEMON) condor_pool@*/* %RED%boscouser%ENDCOLOR%@*/* $(FULL_HOSTNAME) $(IP_ADDRESS)
-ALLOW_ADVERTISE_SCHEDD = %RED%boscouser%ENDCOLOR%@*/*
-
%ENDTWISTY% and define or update the condor_mapfile (e.g. =$HOME/bosco/local.bosco/certs/condor_mapfile=) %TWISTY{%TWISTY_OPTS_DETAILED% showlink="Click to see the condor_mapfile" }%
#
-GSI "^%RED%\/DC\=com\/DC\=DigiCert\-Grid\/O\=Open\ Science\ Grid\/OU\=Services\/CN\=sub\.mydomain\.edu%ENDCOLOR%$" %RED%boscouser@sub.mydomain.edu%ENDCOLOR%
-GSI "^%RED%\/DC\=com\/DC\=DigiCert\-Grid\/O\=Open\ Science\ Grid\/OU\=Services\/CN\=bosco\.mydomain\.edu%ENDCOLOR%$" %RED%boscouser%ENDCOLOR%
-# #
-SSL (.*) ssl@unmapped
-CLAIMTOBE (.*) \1
-PASSWORD (.*) \1
-# #
-GSI (.*) anonymous
-FS (.*) \1
-
%ENDTWISTY% Remember to enable and configure GSI authentication also on the host you are flocking form. - 1 Relaxing BOSCO security setting to allow CLAIMTOBE authentication. This is not very secure. Use it only if you can trust all the machines on the network and remember to enable CLAIMTOBE also on the host you are flocking from %TWISTY{%TWISTY_OPTS_DETAILED% showlink="Click to see the configuration file" }%
#
-# Networking - If you did not already, remember that you need to set BOSCO not to use the loopback port
-#
-NETWORK_INTERFACE =
-
-#
-# Flocking configuration
-# 
-FLOCK_FROM = %RED%host_from.domain%ENDCOLOR%
-
-#
-# Security definitions overrides
-# 
-SEC_DEFAULT_ENCRYPTION = OPTIONAL
-SEC_DEFAULT_INTEGRITY = PREFERRED
-# To allow status read
-SEC_READ_INTEGRITY = OPTIONAL
-
-SEC_CLIENT_AUTHENTICATION_METHODS = FS, PASSWORD, CLAIMTOBE
-
-ALLOW_ADVERTISE_SCHEDD = */%RED%IP_of_the_host_in_flock_from%ENDCOLOR% $(FULL_HOSTNAME) $(IP_ADDRESS) $(ALLOW_DAEMON)
-
-SEC_DAEMON_AUTHENTICATION = PREFERRED
-SEC_DAEMON_INTEGRITY = PREFERRED
-SEC_DAEMON_AUTHENTICATION_METHODS = FS,PASSWORD,CLAIMTOBE
-SEC_WRITE_AUTHENTICATION_METHODS = FS,PASSWORD,CLAIMTOBE
-
%ENDTWISTY% - -After copying from the examples (click above to expand the example files) or editing your configuration file, save it as =$HOME/bosco/local.bosco/config/zzz_condor_config.flocking=. -Other names are OK as long as its definition override the default ones of BOSCO (check with =condor_config_val -config=). + +1. Using GSI authentication (a strong authentication method) you must provide and install X509 certificates, + you must change the **configuration file** (see example below) and define or update the **condor_mapfile**(see example below) + (e.g. `$HOME/bosco/local.bosco/certs/condor_mapfile`) + + **configuration file:** + + # Networking - If you did not already, remember that you need to set BOSCO not to use the loopback port + # + NETWORK_INTERFACE = + + # + # Hosts definition + # + # BOSCO host + H_BOSCO = %RED%bosco.mydomain.edu%ENDCOLOR% + H_BOSCO_DN = %RED%/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=bosco.mydomain.edu%ENDCOLOR% + + # submit host (flocking to BOSCO host) + H_SUB = %RED%sub.mydomain.edu%ENDCOLOR% + H_SUB_DN = %RED%/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=sub.mydomain.edu%ENDCOLOR% + + # + # Flocking configuration + # + FLOCK_FROM = $(FLOCK_FROM) $(H_SUB) + + # + # Security definitions + # + # Assuming system-wide installed CA certificates + GSI_DAEMON_DIRECTORY = /etc/grid-security + # This host's certificates + GSI_DAEMON_CERT = /etc/grid-security/hostcert.pem + GSI_DAEMON_KEY = /etc/grid-security/hostkey.pem + # default GSI_DAEMON_TRUSTED_CA_DIR = $(GSI_DAEMON_DIRECTORY)/certificates + CERTIFICATE_MAPFILE= $HOME/bosco/local.bosco/certs/condor_mapfile + + # Not used + MY_DN = $(H_BOSCO_DN) + + # Who to trust? Include the submitters flocking here + GSI_DAEMON_NAME = $(GSI_DAEMON_NAME), $(H_BOSCO_DN), $(H_SUB_DN) + + # Enable authentication from the Negotiator + SEC_ENABLE_MATCH_PASSWORD_AUTHENTICATION = TRUE + + # Enable gsi authentication, and claimtobe (for campus factories) + # The default (unix) should be: FS, KERBEROS, GSI + SEC_DEFAULT_AUTHENTICATION_METHODS = FS,GSI, PASSWORD, $(SEC_DEFAULT_AUTHENTICATION_METHODS) + SEC_CLIENT_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE + SEC_DAEMON_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE + SEC_WRITE_AUTHENTICATION_METHODS = FS, PASSWORD, GSI, CLAIMTOBE + SEC_ADVERTISE_SCHEDD_METHODS = FS, PASSWORD, GSI, CLAIMTOBE + + ALLOW_DAEMON = $(ALLOW_DAEMON) condor_pool@*/* %RED%boscouser%ENDCOLOR%@*/* $(FULL_HOSTNAME) $(IP_ADDRESS) + ALLOW_ADVERTISE_SCHEDD = %RED%boscouser%ENDCOLOR%@*/* + + **condor-mapfile:** + + GSI "^%RED%\/DC\=com\/DC\=DigiCert\-Grid\/O\=Open\ Science\ Grid\/OU\=Services\/CN\=sub\.mydomain\.edu%ENDCOLOR%$" %RED%boscouser@sub.mydomain.edu%ENDCOLOR% + GSI "^%RED%\/DC\=com\/DC\=DigiCert\-Grid\/O\=Open\ Science\ Grid\/OU\=Services\/CN\=bosco\.mydomain\.edu%ENDCOLOR%$" %RED%boscouser%ENDCOLOR% + # # + SSL (.*) ssl@unmapped + CLAIMTOBE (.*) \1 + PASSWORD (.*) \1 + # # + GSI (.*) anonymous + FS (.*) \1 + + Remember to enable and configure GSI authentication also on the host you are flocking form.

+ + +1. Relaxing BOSCO security setting to allow CLAIMTOBE authentication. This is not very secure. Use it only if you can trust all the machines + on the network and remember to enable CLAIMTOBE also on the host you are flocking from, see a **configuration file** example below: + + **configuration file:** + + # Networking - If you did not already, remember that you need to set BOSCO not to use the loopback port + # + NETWORK_INTERFACE = + + # + # Flocking configuration + # + FLOCK_FROM = %RED%host_from.domain%ENDCOLOR% + + # + # Security definitions overrides + # + SEC_DEFAULT_ENCRYPTION = OPTIONAL + SEC_DEFAULT_INTEGRITY = PREFERRED + # To allow status read + SEC_READ_INTEGRITY = OPTIONAL + + SEC_CLIENT_AUTHENTICATION_METHODS = FS, PASSWORD, CLAIMTOBE + + ALLOW_ADVERTISE_SCHEDD = */%RED%IP_of_the_host_in_flock_from%ENDCOLOR% $(FULL_HOSTNAME) $(IP_ADDRESS) $(ALLOW_DAEMON) + + SEC_DAEMON_AUTHENTICATION = PREFERRED + SEC_DAEMON_INTEGRITY = PREFERRED + SEC_DAEMON_AUTHENTICATION_METHODS = FS,PASSWORD,CLAIMTOBE + SEC_WRITE_AUTHENTICATION_METHODS = FS,PASSWORD,CLAIMTOBE + +After copying from the examples or editing your configuration file, save it as `$HOME/bosco/local.bosco/config/zzz_condor_config.flocking`. +Other names are OK as long as its definition override the default ones of BOSCO (check with `condor_config_val -config`). Then stop and restart BOSCO. %ENDSECTION{"BoscoAdvancedUseInstContent"}% %ENDSECTION{"BoscoAdvancedUse"}% ----# Troubleshooting - ----## Useful Configuration and Log Files -BOSCO underneath is using Condor. You can find all the Condor log files in =~/bosco/local.HOSTNAME/log= +#### Troubleshooting +###### Useful Configuration and Log Files +BOSCO underneath is using Condor. You can find all the Condor log files in `~/bosco/local.HOSTNAME/log` %STARTSECTION{"BoscoTroubleshootingItems"}% ----## Make sure that you can connect to the BOSCO host -If you see errors like:
- Installing BOSCO on user@osg-ss-submit.chtc.wisc.edu...
- ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
- rsync: connection unexpectedly closed (0 bytes received so far) [sender]
- rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
- ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
- rsync: connection unexpectedly closed (0 bytes received so far) [sender]
- rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
- ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
- rsync: connection unexpectedly closed (0 bytes received so far) [sender]
- rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
-
+
+##### Make sure that you can connect to the BOSCO host
+
+If you see errors like:
+
+    Installing BOSCO on user@osg-ss-submit.chtc.wisc.edu...
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+    ssh: connect to host osg-ss-submit.chtc.wisc.edu port 22: Connection timed out
+    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
+    rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
+
+please try to ssh manually from the BOSCO host to the cluster submit node. The ability to connect is required in order to install BOSCO.
----## Make sure that BOSCO is running
+
+##### Make sure that BOSCO is running
+
+BOSCO may not survive after you log out, for example if the BOSCO node was restarted while you were logged out.
-When you log back in after sourcing the setup as described in the [[#SetupEnvironment][setup environment section]], you should start BOSCO as described in the [[#BoscoStart][BOSCO start section]], specially if the command =condor_q= is failing.
+When you log back in after sourcing the setup as described in the [setup environment section](#setup-environment-before-using),
+you should start BOSCO as described in the [BOSCO start section](#starting-bosco), especially if the command `condor_q` is failing.
+
+##### Errors due to leftover files
----## Errors due to leftover files
-Bosco files on the submit host are in:
- * =~/bosco/= - the release directory
- * =~/.bosco/= - some service files
- * =~/.ssh/= - the ssh key used by BOSCO
-
-If you used =bosco_uninstall= it will remove all BOSCO related files. If you removed BOSCO by hand you must pay attention.
-If the service key is still in =.ssh= but the other files are missing, during the execution of BOSCO commands you will get some unclear errors like
-_"IOError: [Errno 2] No such file or directory: '/home/marco/.bosco/.pass'"_ , _"OSError: [Errno 5] Input/output error"_ , all followed by:
Password-less ssh to marco@itb2.uchicago.edu did NOT work, even after adding the ssh-keys.
-Does the remote resource allow password-less ssh?
-
-If that happens you can remove the service files and the keys using:
rm -r ~/.bosco
-rm ~/.ssh/bosco_key.rsa*
-and then re-add all the clusters with =bosco_cluster --add=. + - `~/bosco/` - the release directory + - `~/.bosco/` - some service files + - `~/.ssh/` - the ssh key used by BOSCO + +If you used `bosco_uninstall` it will remove all BOSCO related files. If you removed BOSCO by hand you must pay attention. +If the service key is still in `.ssh` but the other files are missing, during the execution of BOSCO commands you will get some unclear errors like: + + - `_"IOError: [Errno 2] No such file or directory: '/home/marco/.bosco/.pass'"` or + - `"OSError: [Errno 5] Input/output error"` , all followed by: + +`Password-less ssh to marco@itb2.uchicago.edu did NOT work, even after adding the ssh-keys. +Does the remote resource allow password-less ssh?` + +If that happens you can remove the service files and the keys using: + + $ rm -r ~/.bosco + $ rm ~/.ssh/bosco_key.rsa + +and then re-add all the clusters with: + + $ bosco_cluster --add + +##### Unable to download and prepare BOSCO for remote installation. ----## Unable to download and prepare BOSCO for remote installation. BOSCO can return this error: - 1. Because the BOSCO submit host is unable to download BOSCO for the resource installation, e.g. a firewall is blocking the download or the server is down - 2. More commonly because there are problems with the login host of the BOSCO resource, e.g. the disk is full or there are multiple login nodes -You can check 1 byy downloading BOSCO on your BOSCO submit host. -To check 2 you have to login on the BOSCO resource: =df= will tell you you some disks are full, with =hostname -f= you can check if the name is different form the one that you used to login with ssh. If the name differs probably you are using a cluster with multiple login nodes and you must use only one for BOSCO. Se the second "IMPORTANT" note in the [[#AddResourceSection][section to add a cluster to BOSCO]] (above). - -If you see errors similar to the one below while executing ==bosco_cluster --add==: -
-Downloading for USER@RESOURCE
-Unpacking.tar: Cannot save working directory 
-tar: Error is not recoverable: exiting now 
-ls: /tmp/tmp.qeIJ9139/condor*: No such file or directory 
-Unable to download and prepare BOSCO for remote installation. 
-
-then you are using most likely the generic name of a multi-login cluster and you should use the name of one of the nodes as suggested in the [[#AddResourceSection][note above]]. + +1. Because the BOSCO submit host is unable to download BOSCO for the resource installation, e.g. a firewall is blocking the download or the server is down +1. More commonly because there are problems with the login host of the BOSCO resource, e.g. the disk is full or there are multiple login nodes + +You can check 1 by downloading BOSCO on your BOSCO submit host. +To check 2 you have to login on the BOSCO resource: `df` will tell you if some disks are full, with `hostname -f` you can check if the name + is different form the one that you used to login with ssh. If the name differs probably you are using a cluster with multiple login nodes and + you must use only one for BOSCO. Se the second **note** in the [section to add a cluster to BOSCO](#add-a-cluster-to-bosco) (above). + +If you see errors similar to the one below while executing `bosco_cluster --add`: + + Downloading for USER@RESOURCE + Unpacking.tar: Cannot save working directory + tar: Error is not recoverable: exiting now + ls: /tmp/tmp.qeIJ9139/condor*: No such file or directory + Unable to download and prepare BOSCO for remote installation. + +then you are using most likely the generic name of a multi-login cluster and you should use the name of one of the nodes + as suggested in the [note above](#add-a-cluster-to-bosco). %ENDSECTION{"BoscoTroubleshootingItems"}% ----# Get Help/Support +#### Get Help/Support + To get assistance you can send an email to bosco-discuss@opensciencegrid.org %STARTSECTION{"BoscoReferences"}% ----# References -[[http://bosco.opensciencegrid.org/][BoSCO Web site]] and documents about the latest production release (v1.2) - * [[BoSCO][Using Bosco]] - * [[BoscoInstall][Installing BoSCO]] - * [[BoscoMultiUser][Installing BoSCO Multi User]] - * [[BoscoQuickStart][Quick start guide to Bosco]] - * BoscoR - -Campus Grids related documents: - * https://twiki.grid.iu.edu/bin/view/CampusGrids - * https://twiki.grid.iu.edu/bin/view/Documentation/CampusFactoryInstall + +#### References + +[BoSCO Web site](http://bosco.opensciencegrid.org/) and documents about the latest production release (v1.2) + + - [Using Bosco](#how-to-use) + - [Installing BoSCO](BoscoInstall) + - [Installing BoSCO Multi User](BoscoMultiUser) + - [Quick start guide to Bosco](BoscoQuickStart) + - [BoscoR](BoscoR) Condor documents: - * Condor manual: http://research.cs.wisc.edu/condor/manual/ + + - Condor manual: http://research.cs.wisc.edu/condor/manual/ How to submit Condor jobs: - * Tutorial: http://research.cs.wisc.edu/condor/tutorials/alliance98/submit/submit.html - * Condor manual: http://research.cs.wisc.edu/condor/manual/v7.6/2_5Submitting_Job.html + + - Tutorial: http://research.cs.wisc.edu/condor/tutorials/alliance98/submit/submit.html + - Condor manual: http://research.cs.wisc.edu/condor/manual/v7.6/2_5Submitting_Job.html Developers documents: - * [[TestBoSCO][BOSCO tests for developers and testers]] - * [[BoscoRoadmap][BOSCO Roadmap (planned and desired features)]] + + - [BOSCO tests for developers and testers](TestBoSCO) + - [BOSCO Roadmap (planned and desired features)](BoscoRoadmap) Here you can check out older releases: - * [[BoSCOv0][BOSCO version 0]] - * [[BoSCOv1][BOSCO version 1]] - * [[BoSCOv1p1][BOSCO version 1.1]] - * [[BoSCOv1p2][BOSCO version 1.2]] - * CICiForum130418 + - [BOSCO version 0](BoSCOv0) + - [BOSCO version 1](BoSCOv1) + - [BOSCO 
version 1.1](BoSCOv1p1) + - [BOSCO version 1.2](BoSCOv1p2) + - [CICiForum](CICiForum130418) diff --git a/docs/BoscoQuickStart.md b/docs/BoscoQuickStart.md new file mode 100644 index 0000000..b0074ff --- /dev/null +++ b/docs/BoscoQuickStart.md @@ -0,0 +1,346 @@ +%META:TOPICINFO{author="KyleGross" date="1481047997" format="1.1" +version="1.17"}% %META:TOPICPARENT{name="BoSCO"}% + +# Bosco Quick Start Guide + + + + +\---\# Introduction This is a quick introduction to your personal Bosco +installation. Check the [main document](BoSCO) to find out more about +what Bosco is and how it works. Check the [Install +document](BoscoInstall) for detailed installation requirements and all +the installation options. + +`bosco_quickstart` is a script that installs and sets up Bosco. The +script will take care of: + +1. Installing Bosco in `~/bosco`, if Bosco is not already installed +2. Connecting one cluster (submit host) with Bosco +3. Sending a test job to verify that the cluster is configured + correctly and is working + +And, `bosco_quickstart` will also suggest what to use to submit jobs. + +Bosco is a tool to connect you to the resources you already have access +to. Bosco connects to a **submit host**. The submit host must be a +machine on which you can log in, for example with **`ssh +!username@submit.mycluster.mydomain`**. Given the ability to log in, you +can submit jobs to a local PBS, HTCondor, LSF or SGE queue manager, for +example using commands like **`qsub`** and **`condor_submit`**. In this +document and in Bosco, we use the term **cluster**, because the submit +host normally gives access to a set of worker nodes by way of a queue (a +[traditional cluster](http://en.wikipedia.org/wiki/Computer_cluster)), +but sometimes it may give access to multiple resources where jobs can be +queued. A cluster could even be a gateway to Cloud and Grid resources +(see OSG-XSEDE in the Support section below). + + If you have a previous version +of Bosco installed and you'd like to install Bosco 1.2, then you should +first remove that previous version with **`source ~/bosco/bosco_setenv; +bosco_uninstall --all`**. + +\---\# Install, set up Bosco, and connect to one cluster + +With a single command you will install, set up, and connect Bosco to one +cluster. + +Download Bosco [from this download +page](http://bosco.opensciencegrid.org/download/). We recommend the +Quickstart Multi-Platform installer. If you prefer to work in a terminal +window, you can also copy the URL that will be printed in the [download +page](http://bosco.opensciencegrid.org/download/) and use cURL: + +``` screen +%UCL_PROMPT% curl -o bosco_quickstart.tar.gz ftp://ftp.cs.wisc.edu/GET_THE_URL_FROM_THE_PAGE/bosco_quickstart.tar.gz +``` + +Untar and invoke the **`bosco_quickstart`** script from a terminal with +a current working directory of the \~/Download folder or the folder in +which you saved the file: + +``` screen +%UCL_PROMPT% tar xvzf ./bosco_quickstart.tar.gz +%UCL_PROMPT% ./bosco_quickstart +``` + +If Bosco was not previously installed, when asked "Do you want to +install Bosco? Select y/n and press \[ENTER\])", answer "y" and press +ENTER to confirm that you'd like to continue installing Bosco. The +script will print information while downloading and installing Bosco. + +The script will then ask you three questions about the cluster you want +to connect to Bosco. 
Follow each answer by pressing ENTER: + + - "Type the cluster name and press \[ENTER\]:" This is the full name + of the cluster login node, for example `submit.mycluster.mydomain` + - "Type your name at CLUSTER\_NAME (default CURRENT\_NAME) and press + \[ENTER\]:" This is the `username` that you use to log in on the + cluster you want to connect. If you leave this empty Bosco will + assume that it is the same as your current username on the Bosco + host + - "Type the queue manager for CLUSTER\_NAME (pbs, condor, lsf, sge, + slurm) and press \[ENTER\]" This is the program that is used to + manage the jobs in the cluster you want to connect.Examples of such + programs are PBS, HTCondor, LSF, and SGE. If you don't know what is + installed, ask the system administrator of the cluster. + +You will then be prompted for the password that you use when logging in +to the cluster that you are connecting to Bosco. If you are using ssh +keys to connect to that cluster please see the NOTE (1) below. + +%TWISTY\_OPTS\_OUTPUT% showlink="Click +to see an example of a quickstart execution:" + +``` screen +%UCL_PROMPT% ./bosco_quickstart +Bosco Quickstart +detailed logging is in ./bosco_quickstart.log + +Bosco is not installed. You need Bosco to run this quickstart. +Do you want to install Bosco? Select y/n and press [ENTER]): y +************** Downloading and Installing Bosco *********** +Installing BOSCO....... +BOSCO Installed +************** Starting Bosco: *********** +BOSCO Started +************** Connecting one cluster (resource) to BOSCO: *********** +At any time hit [CTRL+C] to interrupt. + +Type the submit host name for the Bosco resource and press [ENTER]: uc3-sub.uchicago.edu +Type your username on uc3-sub.uchicago.edu (default mmb) and press [ENTER]: +Type the queue manager for uc3-sub.uchicago.edu (pbs, condor, lsf, sge, slurm) and press [ENTER]: condor +Adding uc3-sub.uchicago.edu, user: mmb, queue manager: condor +mmb@uc3-sub.uchicago.edu's password: +............................................................... +uc3-sub.uchicago.edu added +************** Testing the cluster (resource): *********** +This may take up to 2 minutes... please wait.......................................................................... +BOSCO on uc3-sub.uchicago.edu Tested +************** Congratulations, Bosco is now setup to work with uc3-sub.uchicago.edu! *********** +You are ready to submit jobs with the "condor_submit" command. +Remember to setup the environment all the time you want to use Bosco: +source ~/bosco/bosco_setenv + +Here is a quickstart guide about Bosco: +https://twiki.grid.iu.edu/bin/view/CampusGrids/BoscoQuickStart + +To remove Bosco you can run: +source ~/bosco/bosco_setenv; bosco_uninstall --all + +Here is a submit file example (supposing you want to run "myjob.sh"): +universe = grid +Executable = myjob.sh +arguments = +output = myjob.output.txt +error = myjob.error.txt +log = myjob.log +transfer_output_files = +should_transfer_files = YES +when_to_transfer_output = ON_EXIT +queue 1 +``` + + + + (1) You must be able to login to +the remote cluster. If password authentication is OK, the script will +ask you for your password. If key only login is allowed, then you must +load your key in the `ssh-agent`. Here is an example adding the key and +testing the login: +%TWISTY\_OPTS\_DETAILED% \
  eval
+\`ssh-agent\` Agent pid 17103;
+ ssh-add id\_rsa\_bosco
+Enter passphrase for id\_rsa\_bosco: Identity added: id\_rsa\_bosco
+(id\_rsa\_bosco)  ssh
+ Last login: Thu Sep 13 13:49:33 2012 from
+uc3-bosco.mwt2.org $ logout \
+
+
+ (2) Some clusters have multiple
+login nodes behind a round robin DNS server. You can recognize them when
+you log in to the node (for example: **`ssh login.mydomain.org`**), as
+it will show a name different from the one used to connect (for example,
+**`hostname -f`** returns **`login2.mydomain.org`**). If this happens
+you must connect the Bosco resources by using a name of the host, not
+the DNS alias (for example: **`bosco_cluster --add
+login2.mydomain.org`**). This is because sometimes these multiple login
+nodes do not share all the directories and Bosco may be unable to find
+its files if different connections land on different hosts. The
+following example shows this:
+%TWISTY\_OPTS\_DETAILED%  Note
+how `midway-login2.rcc.uchicago.edu` must be used instead of
+`midway.rcc.uchicago.edu`: \
+ ssh
+%
+ password:
+**`===========================================================================`**
+Welcome to Midway Research Computing Center University of Chicago ...
+**`===========================================================================`**
+
+\[ \~\]$ hostname -f
+%RED%midway-login2.rcc.uchicago.edu
+\[ \~\]$ logout Connection to midway.rcc.uchicago.edu
+closed.  marco$
+bosco\_cluster --add %
+Warning: No batch system specified, defaulting to PBS If this is
+incorrect, rerun the command with the batch system specified
+
+Enter the password to copy the ssh keys to
+:
+ password: Detecting PBS cluster
+configuration...bash: cannot set terminal process group (-1): Invalid
+argument bash: no job control in this shell bash: qmgr: command not
+found Done\! Downloading for ....
+Unpacking......... Sending libraries to
+... Creating BOSCO for the
+WN's....................................................................
+Installation complete The cluster 
+has been added to BOSCO It is available to run jobs submitted with the
+following values: \> universe = grid \> grid\_resource = batch pbs
+
+ \
+
+
+Once the quickstart script completes successfully, you can remove both
+the script and the log file:
+
+``` screen
+%UCL_PROMPT% rm ./bosco_quickstart ./bosco_quickstart.log
+```
+
+These directions will give you a personal installation of Bosco,
+connected to a single resource. **To learn how to add more resources and
+how to customize your Bosco installation, see the [full install
+guide](BoscoInstall).**
+
+\---\#\# How to use Bosco \#SetupEnvironment Since Bosco is not
+installed in the system path, a file containing environment variables
+must be sourced to set the variables for all Bosco commands, such as
+start, stop, job submission and query. Source the file with:
+
+``` screen
+%UCL_PROMPT% source ~/bosco/bosco_setenv
+```
+
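+If you would rather not retype this at every login, one optional convenience (assuming a bash-style login shell) is to append the same command to your shell profile:
+
+``` screen
+%UCL_PROMPT% echo "source ~/bosco/bosco_setenv" >> ~/.profile
+```
+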
+\---\# How to submit a job After installation, jobs may be submitted.
+Submission is demonstrated with a simple example. For more examples and
+options, check the [main Bosco document](BoSCO).
+
+Job submission with Bosco incorporates 3 items:
+
+1.  Have an executable
+2.  Prepare a submit description file
+3.  Submit the job
+
+\---\#\# The Executable This example uses a bash script, called
+`myjob.sh`: \
 \#!/bin/bash
+
+\# Prepare for execution
+
+\# Run the actual applications
+hostname
+date
+id
+whoami
+
+\# Final steps
+
+\
+
+Make sure that the file has permissions set such that it is executable:
+\
 chmod
++x myjob.sh\
+
+You may wish to run this script locally to test it and to see the
+output: \

+./myjob.sh\
+
+\---\#\# The Submit Description File This submit description file will
+submit the script as a job to the remote cluster, using the HTCondor
+grid universe. Put the following into a file called `example.sub`:
+
+``` file
+universe = grid
+executable = myjob.sh
+arguments = 
+output = output
+error = error
+log = myjob.log
+transfer_output_files = 
+should_transfer_files = YES
+when_to_transfer_output = ON_EXIT
+queue 1
+```
+
+This sample file contains HTCondor commands with nothing to the right of
+the `=` sign. These are commands that are not used in this example, but
+are included so that this sample file may be more easily adapted
+into further, more complex examples.
+
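+As a sketch of how those empty commands might be filled in later (the argument and output file name below are invented purely for illustration), a more complete description file could look like:
+
+``` file
+universe = grid
+executable = myjob.sh
+# pass one argument to myjob.sh
+arguments = 10
+output = output
+error = error
+log = myjob.log
+# hypothetical file that myjob.sh is assumed to produce
+transfer_output_files = myresults.txt
+should_transfer_files = YES
+when_to_transfer_output = ON_EXIT
+queue 1
+```
+
+Depending on how your resource was added, you may also need a `grid_resource = batch ...` line naming the cluster, as shown in the [main Bosco document](BoSCO).
+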
+\---\#\# Job Submission Submit the job in the file `example.sub` with
+the `condor_submit` command: \
+ condor\_submit example.sub
+\
+
+\---\#\# Job Monitoring Monitor the job with `condor_q`. This example
+shows the job in the idle state: \
+ condor\_q
+
+\-- Submitter: uc3-c001.mwt2.org : \<10.1.3.101:45876\> : uc3-c001.mwt2.org
+ ID      OWNER            SUBMITTED     RUN\_TIME ST PRI SIZE CMD
+  12.0   marco           7/6  13:45   0+00:00:00 I  0   0.0  short2.sh 10
+
+1 jobs; 0 completed, 0 removed, 1 idle, 0 running, 0 held, 0 suspended
+\
+
+The job will be in the idle state if it is currently idle at the remote
+cluster. When the job is being executed on the remote cluster, the `ST`
+(State) will be `R`, and the `RUN_TIME` will grow.
+
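+For comparison, once the job starts at the remote cluster the same query might look roughly like this (illustrative output, not captured from a real run):
+
+``` screen
+%UCL_PROMPT% condor_q
+
+-- Submitter: uc3-c001.mwt2.org : <10.1.3.101:45876> : uc3-c001.mwt2.org
+ ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
+  12.0   marco           7/6  13:45   0+00:05:12 R  0   0.0  short2.sh 10
+
+1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
+```
+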
+Another method for monitoring a job is to check the job's `log`, a human
+readable, chronologically ordered set of events relating to the job. The
+`log` for this example was specified in the submit description file as
+named `myjob.log`. You can view the file by using `cat`: \
 cat myjob.log \
+
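+The log is appended to as the job advances through its life cycle. An abbreviated, illustrative excerpt (timestamps and addresses are made up) might look like:
+
+``` screen
+000 (012.000.000) 07/06 13:45:22 Job submitted from host: <10.1.3.101:45876>
+...
+001 (012.000.000) 07/06 13:52:15 Job executing on host: <10.1.3.102:9618>
+...
+005 (012.000.000) 07/06 13:53:01 Job terminated.
+...
+```
+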
+\---\#\# Job output Once the job completes, Bosco will transfer back to
+the local machine standard output, standard error and any output files
+specified in the submit description file. The example did not specify
+any; they would be listed within the `transfer_output_files` command.
+These files will be placed in the current working directory, as set when
+the `condor_submit` command was executed.
+
+\---\# What's next Modify the example submit description file to specify
+your job.
+
+In further documentation, Bosco jobs are the same as HTCondor jobs. More
+features of job submission are described on these pages about HTCondor
+jobs:
+
+  - Tutorial:
+    
+  - Condor manual:
+    
+
+To learn more about Bosco (v1.2), check these pages:
+
+  - [BoSCO Web site](http://bosco.opensciencegrid.org/)
+  - [Using Bosco](BoSCO)
+  - [Installing BoSCO](BoscoInstall) - Full installation and set up
+    guide (includes adding more resources, sending custom job
+    attributes, ...)
+  - [Installing BoSCO Multi User](BoscoMultiUser) - Have your site
+    administrator install Bosco for all the users on the submit node.
+  - [Bosco tests for developers and testers](TestBoSCO) - If you want to
+    learn more and help us.
+
+\---\# Get Help/Support For further assistance, you can send an email to
+ .
+
+[Open Science Grid](http://www.opensciencegrid.org/) is a collaboration
+that can provide access to opportunistic computing cycles. To access OSG
+via the [OSG-XSEDE
+gateway](https://www.opensciencegrid.org/bin/view/Trash/Trash/VirtualOrganizations/OSGasXsedeSp),
+you can write a request to  .
+Once you have access to osg-xsede, you can connect to it using Bosco.
+
+# Comments
+
+
diff --git a/docs/BoscoR.md b/docs/BoscoR.md
new file mode 100644
index 0000000..ad0d271
--- /dev/null
+++ b/docs/BoscoR.md
@@ -0,0 +1,115 @@
+%META:TOPICINFO{author="KyleGross" date="1476284786" format="1.1"
+version="1.15"}% %META:TOPICPARENT{name="BoSCO"}% \\
+
+# Bosco-R Instructions
+
+ This is an alpha document. If you
+have any issues with installing and running Bosco or GridR, please email
+
+
+Bosco-R is the transparent integration of the [R
+statistics](http://www.r-project.org/) computing environment with
+[Bosco](http://bosco.opensciencegrid.org/).
+
+
+
+## Requirements
+
+1.  Installation of R on your submit machine. You can find binaries on
+    the [R website](http://cran.rstudio.com/)
+
+## Installation Instructions
+
+Installation is a three step process:
+
+1.  Install Bosco
+2.  Connect your cluster to Bosco
+3.  Install the Bosco+GridR package.
+
+Both 1 and 2 (the installation of Bosco and connecting your cluster) are
+covered in the Bosco [quick start
+guide](https://twiki.grid.iu.edu/bin/view/Trash/Trash/CampusGrids/BoscoQuickStart),
+please use that guide to install and configure Bosco. This document will
+only cover the installation of the Bosco+GridR package.
+
+### Install Modified GridR
+
+ This section will be greatly
+simplified once the modifications to GridR have been sent upstream.
+
+ Official documentation for
+installing GridR can be found on GridR's
+[wiki](https://github.com/osg-bosco/GridR/wiki/Installing-GridR).
+
+Download
+[GridR](https://www.dropbox.com/s/9yb50t07bn111xd/GridR_0.9.7.tar.gz).
+Install with the command line:
+
+``` file
+%UCL_PROMPT% R CMD INSTALL --build Downloads/GridR_0.9.7.tar.gz 
+```
+
+## Using BoscoR
+
+### A simple example.
+
+Be sure to have bosco running. To ensure that Bosco is running, you can
+run, from the command line:
+
+``` screen
+%UCL_PROMPT% source ~/bosco/bosco_setenv;
+%UCL_PROMPT% bosco_start
+```
+
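+A quick way to confirm that Bosco is actually up, per the troubleshooting advice in the main Bosco documentation, is to check that the scheduler answers:
+
+``` screen
+%UCL_PROMPT% condor_q
+```
+
+If `condor_q` fails, run `bosco_start` again before starting R.
+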
+Start R, or RStudio, or whatever R environment you prefer. First, you
+need to load the GridR library:
+
+``` screen
+> library("GridR")
+```
+
+Then, you need to initialize GridR to use Bosco:
+
+``` screen
+> grid.init(service="bosco.direct", localTmpDir="tmp")
+```
+
+Now you are ready to create a function and send it to the connected
+cluster. Below is a very simple function that will double the input
+value:
+
+``` screen
+> a <- function(s) { return (2*s) }
+```
+
+Now, you can send it to the remote cluster.
+
+``` screen
+> grid.apply("x", a, 13)
+starting bosco.direct mode
+```
+
+You can check on the status of the job with the command
+grid.printJobs():
+
+``` screen
+> grid.printJobs()
+```
+
+Once the job has completed, the output value from the function `a` will
+be assigned to the variable `x`.
+
+``` screen
+> x
+[1] 26
+```
+
+%META:FILEATTACHMENT{name="Rbosco.png" attachment="Rbosco.png" attr=""
+comment="" date="1372098727" path="R\&bosco.png" size="94331"
+stream="R\&bosco.png" tmpFilename="/usr/tmp/CGItemp1407"
+user="DerekWeitzel" version="1"}%
+%META:FILEATTACHMENT{name="BoscoR-pic.png" attachment="BoscoR-pic.png"
+attr="" comment="" date="1372296222" path="BoscoR-pic.png" size="87708"
+stream="BoscoR-pic.png" tmpFilename="/usr/tmp/CGItemp1567"
+user="DerekWeitzel" version="1"}%
diff --git a/docs/BoscoRoadmap.md b/docs/BoscoRoadmap.md
new file mode 100644
index 0000000..6297e0d
--- /dev/null
+++ b/docs/BoscoRoadmap.md
@@ -0,0 +1,24 @@
+%META:TOPICINFO{author="KyleGross" date="1481047997" format="1.1"
+version="1.5"}% %META:TOPICPARENT{name="BoSCO"}%
+
+# BOSCO Roadmap
+
+
+
+
+\---\# Introduction
+
+This document contains the roadmap for the [BOSCO](BoSCO) project.
+
+\---\# Features
+
+\---\# Releases
+
+\---\# Get Help/Support To get assistance you can send an email to
+
+
+BoscoInstall
+
+# Comments
+
+
diff --git a/docs/CICiForum130418.md b/docs/CICiForum130418.md
new file mode 100644
index 0000000..cf27e86
--- /dev/null
+++ b/docs/CICiForum130418.md
@@ -0,0 +1,392 @@
+%META:TOPICINFO{author="KyleGross" date="1476284786" format="1.1"
+version="1.5"}% %META:TOPICPARENT{name="BoscoInstall"}%
+
+# Bosco + SkeletonKey for High Throughput R Applications
+
+Here is an example using BOSCO and SkeletonKey with the OASIS software
+service to run distributed high-throughput R-applications on campus grid
+environments.
+
+  - First we'll be installing BOSCO as shown in
+    Trash/CampusGrids.BoscoAHM13
+  - Then we'll install SkeletonKey and use it to run R as seen in
+    [Trash/CampusGrids.Quickstart](Trash/CampusGrids.Quickstart) and
+    [Trash/CampusGrids.SoftwareAndDataAccess](Trash/CampusGrids.SoftwareAndDataAccess).
+
+# Getting Started
+
+You will need a login on your host, the ability to share a `public_html`
+directory, and login access to a remote cluster.
+
+You will also need access to a Web proxy (e.g. a squid proxy server). The
+closer this is to your cluster, the more efficient your jobs will be.
+Several campuses provide one (check with your network administrators).
+If you want to install one, OSG provides a package and
+[instructions](Documentation/Release3.InstallFrontierSquid). For this
+tutorial you can also use the OSG ITB proxy server, even if it will not
+be very efficient if you are far from Chicago\!
+
+In this tutorial I will be using `bash` shell. If `echo $SHELL` returns
+something different from `/bin/bash` then run `/bin/bash` to start a
+Bash session. This is a very abbreviated install document for Bosco. For
+the full install document, view [Bosco Installer](BoscoInstall). And for
+more information on data transfer see SkeletonKey.
+
+ You will need to install Bosco
+and SkeletonKey on a RedHat (Or CentOS or Scientific Linux) computer. It
+must also not have HTCondor already running.
+
+# Let's start with Bosco
+
+## Download & Install Bosco
+
+ 
+     Bosco Download
+ 
+
+Visit the Bosco [download](http://bosco.opensciencegrid.org/download/)
+page. Choose the Multi-Platform Installer. After downloading the
+installer, from the terminal, untar it and run the installer as a
+regular user:
+
+``` screen
+%UCL_PROMPT% tar xzf boscoinstaller.tar.gz
+%UCL_PROMPT% python boscoinstaller
+```
+
+## Starting Bosco & adding your first cluster
+
+First you will need to setup your environment to have Bosco installed:
+
+``` screen
+%UCL_PROMPT% source ~/bosco/bosco_setenv
+```
+
+Start Bosco:
+
+``` screen
+%UCL_PROMPT% bosco_start
+```
+
+Add your first cluster. You will need your username and password from
+the sheet.
+
+``` screen
+%UCL_PROMPT% bosco_cluster --add demo%RED%XX%ENDCOLOR%@boscopbs.opensciencegrid.org pbs
+```
+
+## Test the new Cluster
+
+In order to confirm everything is working with the remote cluster, you
+may want to test the Bosco cluster.
+
+``` screen
+%UCL_PROMPT% bosco_cluster -t demo%RED%XX%ENDCOLOR%@boscopbs.opensciencegrid.org
+```
+
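+Optionally, once the test succeeds you can already try a direct grid-universe submission to the new resource. A minimal sketch along the lines of the examples in the main Bosco documentation (`myjob.sh` stands for any executable script you provide):
+
+``` file
+universe = grid
+grid_resource = batch pbs demo%RED%XX%ENDCOLOR%@boscopbs.opensciencegrid.org
+executable = myjob.sh
+output = myjob.out
+error = myjob.err
+log = myjob.log
+queue 1
+```
+
+Submit it with `condor_submit` after sourcing `~/bosco/bosco_setenv`.
+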
+# Now setup SkeletonKey
+
+## Download and Install
+
+SkeletonKey uses a python script to install and set things up for the
+user. The installation procedure is as outlined below:
+
+1.  First download the SkeletonKey installer script \
+
+ wget
+uc3-data.uchicago.edu/sk/install-skeletonkey.py \
+
+1.  Pick a directory to install the CCTools and SkeletonKey binaries in
+    (e.g. `bin` in your home directory). Ideally this directory should
+    be in `$PATH`. \
+
+ mkdir \~/bin \
+
+1.  Pick a directory to export from Chirp (for now the tutorial will use
+    /tmp/%RED%your\_user where
+    your\_user is your username)
+2.  Run the installer, specifying the directory to install and the
+    directory to export from Chirp: \
+
+ python
+install-skeletonkey.py -b \~/bin -e
+/tmp/%RED%your\_user \
+
+1.  Add the directory specified in `-b` option (e.g. `~/bin`) to
+    `$PATH`: \
+
+ export PATH=$PATH:\~/bin
+\
+
+1.  Edit `~/.profile` and append the following line: \
export PATH=$PATH:\~/bin\
+
+## Setup and start Chirp
+
+Add `chirp` in your path:\
+ cd bin/
+ ln -s
+cctools-3.7.1-x86\_64-redhat5/bin/chirp chirp \
+
+Create your data directories (the root directory has to be the same that
+you used in the -e option during the SkeletonKey installation above):
+\
 
+mkdir
+/tmp/%RED%your\_user\_name
+ cd
+/tmp/%RED%your\_user\_name
+ mkdir data output
+ cd \
+
+Start the chirp server: \
+ chirp\_control start
+\
+
+## Test SkeletonKey
+
+Here is a test that will just go through the mechanics of running
+skeleton key and generating a job wrapper.
+
+### Setting up binaries
+
+In order for the job wrapper that SkeletonKey provides to work
+correctly, you'll need to make the cctools binaries available on a
+webserver. The SkeletonKey installer created a file called
+`parrot.tar.gz` in the `~/bin` that you'll need to copy to your
+webserver and make it available over http: \
+ cp \~/bin/parrot.tar.gz
+%RED%\~/public\_html
+ chmod 644
+%RED%\~/public\_html/parrot.tar.gz
+\
+
+### Creating the job wrapper
+
+You'll need to do the following on the machine where you installed
+SkeletonKey
+
+1.  Open a file called `sk_test.ini` and add the following lines: \
+
+\[Parrot\] location =
+%RED%http://your.host/\~your\_user/parrot.tar.gz
+
+\[Application\] script = /bin/hostname \
+
+1.  In `sk_test.ini`, change the url
+    `http://your.host/~your_name/parrot.tar.gz` to point to the url of
+    the parrot tarball that you copied previously.
+2.  Run SkeletonKey on `sk_test.ini`: \
+
+ skeleton\_key -c
+sk\_test.ini \ 1. Finally, run the job wrapper to verify that
+it's working correctly \
+ sh ./job\_script.sh
+--2013-04-18 12:48:54--
+ Resolving
+uc3-test.uchicago.edu... 128.135.158.156 Connecting to
+uc3-test.uchicago.edu|128.135.158.156|:80... connected. HTTP request
+sent, awaiting response... 200 OK Length: 10488915 (10M)
+\[application/x-gzip\] Saving to: \`parrot.tar.gz'
+
+100%\[=====================================================================================================\>\]
+10,488,915 --.-K/s in 0.02s
+
+2013-04-18 12:48:54 (569 MB/s) - \`parrot.tar.gz' saved
+\[10488915/10488915\]
+
+uc3-test.uchicago.edu \
+
+In the ini file used by SkeletonKey, two sections are used. The
+SkeletonKey used the `location` setting in the `Parrot` section to
+determine where it can download Parrot binaries to use when running user
+applications. In the `Application` section, the `script` setting
+indicates the command to run in the Parrot environment.
+
+# Now submit the R jobs
+
+The next example will create a job that will read and write from a
+filesystem exported by Chirp using an application that's available using
+OASIS (or any other CVMFS repository). The specific example runs an R
+script that processes a raster image, but you can easily change it.
+
+OASIS is the OSG Application Software Installation Service. For more
+information on OASIS see ReleaseDocumentation.OasisUpdateMethod and to
+install software in OASIS (you need to be a VO software manager) contact
+
+
+R has been installed in the OSG VO space on OASIS and is available at
+the following paths:
+
+  - RHEL5 64bit: `sw/R/rhel5/x86_64/current/`
+  - RHEL6 64bit: `sw/R/rhel6/x86_64/current/`
+
+Before you start, please make sure that Chirp is installed and exporting
+a directory (this tutorial will assume that Chirp is exporting
+`/tmp/your_user_name`).
+
+## Creating the application tarball
+
+Since we'll be running an application from OASIS, we'll include in the
+application tarball a script that does some initial setup and then
+invokes the actual application.
+
+1.  Create a directory for the script:
+
+        mkdir /tmp/rjob_test
+
+1.  Create an R script `/tmp/rjob_test/test.R` with the following lines:
+
+        #!/usr/bin/Rscript --vanilla
+
+        library(raster)
+        args <- commandArgs(TRUE)
+        grbFile <- args[1]
+        scanHowMany <- args[2]
+        output <- args[3]
+        grb <- brick(grbFile)
+
+        for (n in 1:scanHowMany) {
+          r <- subset(grb, n)
+          # append so that every scanned layer is recorded in the output file
+          cat(paste(names(r), cellStats(r, "sum"), sep=" "), "\n",
+              file=output, append=TRUE)
+        }
+
+1.  Create a shell script, `/tmp/rjob_test/myapp.sh`, that sets up the
+    environment and then runs R (as shown above, `test.R` takes 3
+    arguments: a raster file, `data.grb`, the number of iterations,
+    `100`, and the output file):
+
+        #!/bin/sh
+        ROOT_DIR=/cvmfs/uc3.uchicago.edu/sw
+        export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOT_DIR/lib
+        $ROOT_DIR/bin/Rscript ./rjob_test/test.R ./rjob_test/data.grb 100 $CHIRP_MOUNT/output/$1
+        echo "Finishing script at: "
+        echo `date`
+
+1.  Next, make sure the `myapp.sh` script is executable and create a
+    tarball:
+
+        chmod 755 /tmp/rjob_test/myapp.sh
+        cd /tmp
+        tar cvzf rjob_test.tar.gz rjob_test
+
+1.  Then copy the tarball to your webserver:
+
+        cd /tmp
+        cp rjob_test.tar.gz ~/public_html/
+        chmod 644 ~/public_html/rjob_test.tar.gz
+
+1.  Finally, copy or download the CVMFS repository key and make it
+    available on your web server. If it is already available on a public
+    server that you trust, you can use that URL directly.
+
+  - For OASIS the key is at
+    `http://uc3-test.uchicago.edu/~testu1/oasis.opensciencegrid.org.pub`:
+
+        -----BEGIN PUBLIC KEY-----
+        MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqQGYXTp9cRcMbGeDoijB
+        gKNTCEpIWB7XcqIHVXJjfxEkycQXMyZkB7O0CvV3UmmY2K7CQqTnd9ddcApn7BqQ
+        /7QGP0H1jfXLfqVdwnhyjIHxmV2x8GIHRHFA0wE+DadQwoi1G0k0SNxOVS5qbdeV
+        yiyKsoU4JSqy5l2tK3K/RJE4htSruPCrRCK3xcN5nBeZK5gZd+/ufPIG+hd78kjQ
+        Dy3YQXwmEPm7kAZwIsEbMa0PNkp85IDkdR1GpvRvDMCRmUaRHrQUPBwPIjs0akL+
+        qoTxJs9k6quV0g3Wd8z65s/k5mEZ+AnHHI0+0CL3y80wnuLSBYmw05YBtKyoa1Fb
+        FQIDAQAB
+        -----END PUBLIC KEY-----
+
+  - For UC3's CVMFS the key is
+    `http://uc3-data.uchicago.edu/uc3.key`:
+
+        -----BEGIN PUBLIC KEY-----
+        MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3tsg79ghhbxquF3m7oQ3
+        A7D77TfafuU2qK5SfW/HeERmBSfWdTNagygNhUjK1rROCmqekz3lnn25hma2Qodz
+        9W3oqbQHRdCT/MTPLpcTl/n12fCtjMDHPfclnc39gu6uPGkRU21DHCusPaznMtGL
+        hvwa3qSsi646UTaqKvD0dsUFVnUbVaG+XTi5jSQMHRTaGy1JCZBpVDMrMIgZzwDp
+        9TLy1/5VEejBtYBt2rpV09IieurmA2T4Wsa+7zPUazPx2g+xsMyQ3fCu1fP7oszx
+        JVbEnNXWhtuZ4R/1DrebXtojtrj6oc2bGlN92UDdthtC1/gE80Kc8tONfQt4P1Ea
+        KQIDAQAB
+        -----END PUBLIC KEY-----
+
+One thing to note here is that Parrot makes mounted CVMFS repositories,
+like OASIS, available under `/cvmfs/repository_name`, where
+`repository_name` is replaced by the name that the repository is
+published under (see the configuration file `rjob_test.ini` below). For
+example, the `uc3.uchicago.edu` repository configured below is what
+`myapp.sh` reaches through `ROOT_DIR=/cvmfs/uc3.uchicago.edu/sw`.
+
+### Creating a job wrapper
+
+You'll need to do the following on the machine where you installed
+SkeletonKey:
+
+1.  Open a file called `rjob_test.ini` and add the following lines:
+
+        [CVMFS]
+        repo1 = uc3.uchicago.edu
+        repo1_options = url=http://uc3-cvmfs.uchicago.edu/opt/uc3/,pubkey=http://repository_key_url,quota_limit=1000,proxies=squid-proxy:3128
+        repo1_key = http://repository_key_url
+        repo2 = oasis.opensciencegrid.org
+        repo2_options = url=http://oasis-replica.opensciencegrid.org/cvmfs/oasis/,pubkey=http://repository_key_url,quota_limit=1000,proxies=squid-proxy:3128
+        repo2_key = http://repository_key_url
+
+        [Directories]
+        export_base = /tmp/your_user
+        read = /, data
+        write = /, output
+
+        [Parrot]
+        location = http://your.host/~your_user/parrot.tar.gz
+
+        [Application]
+        location = http://your.host/~your_user/rjob_test.tar.gz
+        script = ./rjob_test/myapp.sh
+
+1.  In `rjob_test.ini`, change the urls
+    `http://your.host/~your_user/parrot.tar.gz` and
+    `http://your.host/~your_user/rjob_test.tar.gz` to point to the urls
+    of the parrot and application tarballs that you copied previously,
+    and set the correct values for the exported local directory
+    (`/tmp/your_user`) and the CVMFS servers (repository keys and proxy).
+2.  Run SkeletonKey on `rjob_test.ini`:
+
+        skeleton_key -c rjob_test.ini
+
+1.  Run the job wrapper locally to verify that it's working correctly:
+
+        sh ./job_script.sh test.output
+
+## Submitting the job wrapper via Bosco
+
+Once the job wrapper has been verified to work, multiple instances can
+be run via Bosco. In this example we'll run 5 instances (`queue 5` in
+the submit file).
+
+Create `/tmp/bosco_logs` for the log and output files for Bosco:
+
+    mkdir /tmp/bosco_logs
+
+Create a Bosco (HTCondor) submit file called `rjob_test.submit` with the
+following contents:
+
+    universe = grid
+    grid_resource = batch pbs your_user@your-cluster.example.com
+    notification = never
+    executable = ./job_script.sh
+    arguments = test.output.$(Process)
+    output = /tmp/bosco_logs/rjtest_$(Cluster).$(Process).out
+    error = /tmp/bosco_logs/rjtest_$(Cluster).$(Process).err
+    log = /tmp/bosco_logs/rjtest.log
+    ShouldTransferFiles = YES
+    when_to_transfer_output = ON_EXIT
+    queue 5
+
+Make sure to specify the `grid_resource` line exactly as it was suggested
+when you added the cluster with `bosco_cluster --add` (the host shown
+above is just a placeholder). You can also use vanilla jobs, but there
+may be some additional network requirements for the Bosco submit host.
+See [Trash/CampusGrids.BoSCO](Trash/CampusGrids.BoSCO) for more
+information.
+
+Finally, submit the job to Bosco and verify that the jobs ran
+successfully:
+
+    condor_submit rjob_test.submit
+
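+While the jobs run you can follow their progress with the usual HTCondor
+tools (the log path matches the submit file above):
+
+    condor_q
+    cat /tmp/bosco_logs/rjtest.log
+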
+Something to note in the HTCondor submit file is that we're passing the
+name of the output file to be written using the `arguments` setting, and
+the file name we are using includes the `$(Process)` variable to ensure
+that each queued job writes to a different file. HTCondor passes the
+argument to `job_script.sh`, which then makes sure that it gets appended
+to the arguments passed to the `myapp.sh` script.
+
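+Assuming Chirp is exporting `/tmp/your_user` as configured in
+`rjob_test.ini`, each of the 5 jobs should end up writing its own file
+in the exported `output` directory, for example:
+
+    ls /tmp/your_user/output/
+    # test.output.0  test.output.1  test.output.2  test.output.3  test.output.4
+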
+\-- Main.MarcoMambelli - 18 Apr 2013
diff --git a/docs/TestBoSCO.md b/docs/TestBoSCO.md
new file mode 100644
index 0000000..8ea0bf1
--- /dev/null
+++ b/docs/TestBoSCO.md
@@ -0,0 +1,567 @@
+%META:TOPICINFO{author="KyleGross" date="1481047997" format="1.1"
+version="1.18"}%
+
+# Testing BOSCO
+
+
+
+
+## Introduction
+
+This document describes a standard test of BOSCO. It is for BOSCO
+developers and testers. It requires some knowledge of BOSCO and Condor.
+
+The BOSCO submit node is the host where BOSCO is installed and where
+users log in to submit jobs via BOSCO. The multiple clusters added to
+BOSCO (i.e. where the users can submit jobs via BOSCO) are referred to
+as BOSCO resources. If you are not familiar with these terms or would
+like more information about the BOSCO architecture, please check
+BoSCOv1p1.
+
+### Known Issues
+
+  - [57](https://jira.opensciencegrid.org/browse/CAMPUS-57) Deprecation
+    warning on cluster add.
+
+## Performing basic tests
+
+Always:
+
+  - Note the BOSCO/Condor version, output of `condor_version`
+  - Note the platform of the BOSCO submit node (OS, version, 32/64 bit)
+  - Note the platform of each BOSCO resource and its queue manager (if
+    you have any resource added)
+
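+A couple of standard commands that help collect this information
+(`condor_version` reports the BOSCO/Condor build, `uname` the OS kernel
+and architecture):
+
+    condor_version
+    uname -srm
+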
+### Install test
+
+Perform the installation as described in BoscoInstall or BoscoMultiUser,
+then start BOSCO with `bosco_start`. Verify that `condor_q` is working.
+
+Report which platform was installed and if there is any error.
+
+Verify that `bosco_cluster --list` and `findplatform` work correctly.
+
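+For example (a minimal sketch; all of these should return without errors
+on a fresh install):
+
+    bosco_start
+    condor_q
+    bosco_cluster --list
+    findplatform
+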
+### Add clusters
+
+Add the BOSCO resource as described in BoscoInstall or BoscoMultiUser
+(`bosco_cluster --add`), then test the resource with
+`bosco_cluster --test NAME`.
+
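+For example (a sketch; the host below is a placeholder and the exact
+arguments and supported scheduler types are described in BoscoInstall):
+
+    bosco_cluster --add youruser@cluster.example.edu pbs
+    bosco_cluster --test youruser@cluster.example.edu
+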
+Report the output of `bosco_cluster --list`, the NAME of the resource,
+the jobmanager type, whether the job completed successfully, the stdout
+of the test, the username used, whether it was single or multi user
+Condor, and any important notes.
+
+### Run BOSCO (Grid) Jobs
+
+Run jobs as the `bosco` user. Submit using the Condor "grid" universe
+for the specific cluster you added (PBS, Condor, SGE, ...). For an
+example check the [BOSCO
+documentation](BoSCOv1p1#6_2_1_Direct_Job_submission_exam).
+
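+A typical test cycle looks like this (`grid_test.submit` is only a
+placeholder name for a submit file written per the linked example):
+
+    condor_submit grid_test.submit
+    condor_q                    # watch the job being handed to the remote cluster
+    condor_history -limit 1     # check the completion status afterwards
+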
+Report if it ran correctly and on which resource it ran.
+
+If you have multiple resources, verify that the jobs run on all of them,
+one at a time.
+
+### Remove clusters
+
+Remove the BOSCO resource as described in BoscoInstall or BoscoMultiUser
+(`bosco_cluster --remove`).
+
+Verify that the command runs correctly and that the resource is no
+longer in the output of `bosco_cluster --list`.
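+
+For example (the host below is the same placeholder used when the
+resource was added):
+
+    bosco_cluster --remove youruser@cluster.example.edu
+    bosco_cluster --list      # the resource should no longer be listed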
+
+### Uninstall
+
+You can run this test after all the others, to avoid having to
+re-install BOSCO.
+
+Stop and/or Remove BOSCO:
+
+  - `bosco_stop` stops BOSCO
+  - `bosco_uninstall` removes the BOSCO installation
+  - `bosco_uninstall --all` removes both the BOSCO installation and the
+    configuration files (list of installed clusters and keys)
+
+Verify that the command runs correctly and that the uninstalled
+directories/files are actually removed.
+
+## BOSCO Advanced (or Multi) user tests
+
+These tests verify BOSCO with more than one cluster when the Campus
+Factory is in use.
+
+### Run BOSCO Vanilla Jobs
+
+Run Condor "vanilla" jobs as the `bosco` user for single user BOSCO, or
+as a user different from `bosco` or `root` for multi user BOSCO. For an
+example check the [BOSCO
+documentation](BoSCOv1p1#6_2_2_Glidein_Job_submission_exa).
+
+Report if it ran correctly and on which resource it ran.
+
+If you have multiple resources, verify that the jobs run on all of them.
+
+## BOSCO multi user tests
+
+On top of the basic tests in the multi user environment, there are some
+tests specific to the multi user environment (not as important for BOSCO
+single user).
+
+### Flocking to BOSCO
+
+Verify that other submit hosts can flock to the BOSCO submit host by
+following the instructions in BoscoMultiUser.
+
+### Querying BOSCO from a monitoring host
+
+Check that a monitoring host can run commands like `condor_q` or
+`condor_status` against the BOSCO submit host.
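+
+For example (a sketch; substitute your own BOSCO submit host, and note
+that the port and shared-port collector socket follow the format that
+appears in the test notes below):
+
+    condor_status -pool bosco-submit.example.edu:11000?sock=collector
+    condor_q -global -pool bosco-submit.example.edu:11000?sock=collector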
+
+## BOSCO scalability and reliability tests
+
+The previous tests verify BOSCO functionality. This section considers
+tests measuring scalability and/or reliability when running for longer
+periods of time.
+
+### Scalability test
+
+Submit many jobs to BOSCO (while no other user is using it), let it
+submit them to your available resources, and measure the results and the
+time taken to complete. Please document in the notes whether you are
+sending "vanilla" or "grid" universe jobs, how many resources you are
+using, their type, and an estimate of the available nodes (e.g. PBS
+cluster with 20 nodes, Condor with 30 nodes - it has 60 but half of it
+is normally used and not available for me). At the end, measure how many
+jobs failed (because of BOSCO or the clusters) and how long it took to
+complete all the jobs.
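+
+One possible way to capture the timing side of this test (a rough sketch
+only; `scale_test.submit` is a hypothetical submit file whose `log` line
+points at `/tmp/bosco_logs/scale_test.log` and whose last line is
+`queue 1000`):
+
+    date +%s > scale_test.start
+    condor_submit scale_test.submit
+    condor_wait /tmp/bosco_logs/scale_test.log                 # returns when all queued jobs are done
+    date +%s > scale_test.end
+    grep -c 'Job terminated' /tmp/bosco_logs/scale_test.log    # rough count of jobs that finished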
+
+### Reliability test
+
+## Test policies
+
+Before a release we should test all the functionalities above for all
+the supported queue managers, possibly on all the platforms (at least
+both Mac and Linux).
+
+## Test results
+
+Each person performing a test can copy the template, then update the
+summary for the version.
+
+### BOSCO 1.1 alpha
+
+Summary:
+
+| Test                | Result                              | Date  | Notes |
+| :------------------ | :---------------------------------- | :---- | :---- |
+| **Install**         |  | 11/8  | Marco |
+| **Condor add/test** |  | 11/8  | Marco |
+| **LSF add/test**    |                                     |       |       |
+| **PBS add/test**    |  | 11/30 | Derek |
+| **SGE add/test**    |                                     |       |       |
+| **SLURM add/test**  |                                     |       |       |
+| **Jobs**            |  | 11/30 | Derek |
+| **Flocking**        |                                     |       |       |
+| **Querying**        |                                     |       |       |
+
+#### Individual tests
+
+Test by Marco Mambelli:
+
+  - BOSCO version: $CondorVersion: 7.9.2 Nov 06 2012 BuildID: 76336
+    PRE-RELEASE-UWCS $, $CondorPlatform: x86\_64\_RedHat5 $
+  - Single/Multi user: multi
+  - Platform: RH5
+  - Hosts: uc3-bosco
+
+| Test                | Result                              | Date | Notes                                                               |
+| :------------------ | :---------------------------------- | :--- | :------------------------------------------------------------------ |
+| **Install**         |  |      |                                                                     |
+| **Condor add/test** |  |      |                                                                     |
+| **LSF add/test**    |                                     |      |                                                                     |
+| **PBS add/test**    |  |      | Jobs go on hold                                                     |
+| **SGE add/test**    |  |      | Jobs go on hold                                                     |
+| **SLURM add/test**  |                                     |      |                                                                     |
+| **Jobs**            |                                     |      |                                                                     |
+| **Flocking**        |  |      | CF crashing [39](https://jira.opensciencegrid.org/browse/CAMPUS-39) |
+| **Querying**        |                                     |      |                                                                     |
+
+Notes:
+
+  - Have still to test Derek's 11/9 patch
+
+Test by Derek:
+
+  - BOSCO version: $CondorVersion: 7.9.2 Nov 30 2012 BuildID: 82024 BOSCO
+    $, $CondorPlatform: x86\_64\_RedHat6 $
+  - Single/Multi user: Single
+  - Platform: RH6
+  - Hosts: hcc-cloud instance
+
+| Test                | Result                              | Date  | Notes                                                                             |
+| :------------------ | :---------------------------------- | :---- | :-------------------------------------------------------------------------------- |
+| **Version**         |  | 11/30 |                                                                                   |
+| **Single/Multi**    | Single                              | 11/30 |                                                                                   |
+| **Target Platform** | RH5, SL6                            | 11/30 |                                                                                   |
+| **Tester**          | Derek                               | 11/30 |                                                                                   |
+| **Install**         |  | 11/30 |                                                                                   |
+| **Condor add/test** |  | 11/30 | Worked, but inconvenience: [57](http://jira.opensciencegrid.org/browse/CAMPUS-57) |
+| **LSF add/test**    |                                     |       |                                                                                   |
+| **PBS add/test**    |  | 11/30 | Worked, but inconvenience: [57](http://jira.opensciencegrid.org/browse/CAMPUS-57) |
+| **SGE add/test**    |  | 11/30 | Worked, but inconvenience: [57](http://jira.opensciencegrid.org/browse/CAMPUS-57) |
+| **SLURM add/test**  |                                     |       |                                                                                   |
+| **Grid Jobs**       |  | 11/30 | Test is a grid job                                                                |
+| **Vanilla Jobs**    |  | 11/30 | Glideins started and ran test job                                                 |
+| **Flocking**        |                                     |       |                                                                                   |
+| **Querying**        |                                     |       |                                                                                   |
+
+Test by Marco:
+
+  - BOSCO version: (BOSCO 1.1 alpha3) $CondorVersion: 7.9.2 Nov 30 2012
+    BuildID: 82024 BOSCO $ $CondorPlatform: x86\_64\_RedHat5 $
+  - Single/Multi user: multi
+  - Platform: SL5
+  - Hosts: uc3-bosco
+  - Tester: Marco
+
+| Test                | Result                              | Date       | Notes                                                                                |
+| :------------------ | :---------------------------------- | :--------- | :----------------------------------------------------------------------------------- |
+| **Install**         |  | 12/3       |                                                                                      |
+| **Condor add/test** |  | 12/4       |                                                                                      |
+| **LSF add/test**    |                                     |            |                                                                                      |
+| **PBS add/test**    |  | 12/3       | Test job is OK                                                                       |
+| **SGE add/test**    |  | 12/4       |  Some jobs go on hold. BLAH\_JOB\_STATUS timed out |
+| **SLURM add/test**  |  | 12/4 and 7 | SLURM via PBS emulation                                                              |
+| **Grid Jobs**       |  |            | Only on Condor and PBS                                                               |
+| **Vanilla Jobs**    |  | 12/5       | On the supported clusters                                                            |
+| **Flocking**        |                                     |            |                                                                                      |
+| **Querying**        |  | 12/4       | condor\_status -pool uc3-bosco.uchicago.edu:11000?sock=collector                     |
+
+
+Notes:
+
+  - Example of job going on hold on SGE
+    (`/opt/bosco/local.uc3-bosco/bosco-test/tmp.fJeWQ16622/`):
+
+000 (002.000.000) 12/04 17:28:49 Job submitted from host: <128.135.158.154:11000?sock=15493_1853_3>
+...
+027 (002.000.000) 12/04 17:28:57 Job submitted to grid resource
+    GridResource: batch sge uc3@siraf-login.bsd.uchicago.edu
+    GridJobId: batch sge uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#2.0#1354663729 sge/20121204173819/868604
+...
+012 (002.000.000) 12/04 17:35:03 Job was held.
+	BLAH_JOB_STATUS timed out
+	Code 0 Subcode 0
+...
+
+and job completing OK:
+-bash-3.2$ cat /opt/bosco/local.uc3-bosco/bosco-test/tmp.OklFU17319/logfile 
+000 (003.000.000) 12/04 17:41:55 Job submitted from host: <128.135.158.154:11000?sock=15493_1853_3>
+...
+027 (003.000.000) 12/04 17:42:05 Job submitted to grid resource
+    GridResource: batch sge uc3@siraf-login.bsd.uchicago.edu
+    GridJobId: batch sge uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#3.0#1354664515 sge/20121204175125/868605
+...
+005 (003.000.000) 12/04 17:43:40 Job terminated.
+	(1) Normal termination (return value 0)
+		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
+		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
+		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
+		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
+	0  -  Run Bytes Sent By Job
+	0  -  Run Bytes Received By Job
+	0  -  Total Bytes Sent By Job
+	0  -  Total Bytes Received By Job
+...
+
+  - SLURM via PBS submission:
+12/07/12 13:29:38 [23823] Trying to update collector <128.135.158.154:11000?sock=collector>
+12/07/12 13:29:38 [23823] Attempting to send update via TCP to collector uc3-bosco.uchicago.edu <128.135.158.154:11000?sock=collector>
+12/07/12 13:29:38 [23823] (19.0) doEvaluateState called: gmState GM_INIT, remoteState 0
+12/07/12 13:29:38 [23823] GAHP server pid = 24844
+12/07/12 13:29:38 [23823] GAHP[24844] (stderr) -> Agent pid 24863
+12/07/12 13:29:39 [23823] GAHP server version: $GahpVersion: 1.16.5 Mar 31 2008 INFN blahpd (poly,new_esc_format) $
+12/07/12 13:29:39 [23823] GAHP[24844] <- 'COMMANDS'
+12/07/12 13:29:39 [23823] GAHP[24844] -> 'S' 'ASYNC_MODE_OFF' 'ASYNC_MODE_ON' 'BLAH_GET_HOSTPORT' 'BLAH_JOB_CANCEL' 'BLAH_JOB_HOLD' 'BLAH_JOB_REFRESH_PROXY' 'BLAH_JOB_RESUME' 'BLAH_JOB_SEND_PROXY_TO_WORKER_NODE' 'BLAH_JOB_STATUS' 'BLAH_JOB_SUBMIT' 'BLAH_SET_GLEXEC_DN' 'BLAH_SET_GLEXEC_OFF' 'BLAH_SET_SUDO_ID' 'BLAH_SET_SUDO_OFF' 'COMMANDS' 'QUIT' 'RESULTS' 'VERSION'
+12/07/12 13:29:39 [23823] GAHP[24844] <- 'ASYNC_MODE_ON'
+12/07/12 13:29:39 [23823] GAHP[24844] -> 'S' 'Async mode on'
+12/07/12 13:29:39 [23823] GAHP server pid = 24869
+12/07/12 13:29:39 [23823] GAHP[24869] (stderr) -> Agent pid 24885
+12/07/12 13:29:40 [23823] GAHP[24869] (stderr) -> Allocated port 55604 for remote forward to
+12/07/12 13:29:40 [23823] GAHP server version: $GahpVersion 2.0.1 Jul 30 2012 Condor_FT_GAHP $
+12/07/12 13:29:40 [23823] GAHP[24869] <- 'COMMANDS'
+12/07/12 13:29:40 [23823] GAHP[24869] -> 'S' 'DOWNLOAD_SANDBOX' 'UPLOAD_SANDBOX' 'DESTROY_SANDBOX' 'ASYNC_MODE_ON' 'ASYNC_MODE_OFF' 'RESULTS' 'QUIT' 'VERSION' 'COMMANDS'
+12/07/12 13:29:40 [23823] GAHP[24869] <- 'ASYNC_MODE_ON'
+12/07/12 13:29:40 [23823] GAHP[24869] -> 'S'
+12/07/12 13:29:40 [23823] (19.0) gm state change: GM_INIT -> GM_START
+12/07/12 13:29:40 [23823] (19.0) gm state change: GM_START -> GM_CLEAR_REQUEST
+12/07/12 13:29:40 [23823] (19.0) gm state change: GM_CLEAR_REQUEST -> GM_UNSUBMITTED
+12/07/12 13:29:40 [23823] (19.0) gm state change: GM_UNSUBMITTED -> GM_SAVE_SANDBOX_ID
+12/07/12 13:29:42 [23823] (18.0) doEvaluateState called: gmState GM_SUBMITTED, remoteState 2
+12/07/12 13:29:42 [23823] (18.0) gm state change: GM_SUBMITTED -> GM_POLL_ACTIVE
+12/07/12 13:29:42 [23823] GAHP[23829] <- 'BLAH_JOB_STATUS 22 condor/150972//'
+12/07/12 13:29:42 [23823] GAHP[23829] -> 'S'
+12/07/12 13:29:43 [23823] (16.0) doEvaluateState called: gmState GM_SUBMITTED, remoteState 2
+12/07/12 13:29:43 [23823] (16.0) gm state change: GM_SUBMITTED -> GM_POLL_ACTIVE
+12/07/12 13:29:43 [23823] GAHP[23829] <- 'BLAH_JOB_STATUS 23 condor/150971//'
+12/07/12 13:29:43 [23823] GAHP[23829] -> 'S'
+12/07/12 13:29:43 [23823] resource mmb@midway-login2.rcc.uchicago.edu is now up
+12/07/12 13:29:43 [23823] in doContactSchedd()
+12/07/12 13:29:43 [23823] SharedPortClient: sent connection request to schedd at <128.135.158.154:11000> for shared port id 23597_7536_3
+12/07/12 13:29:43 [23823] querying for removed/held jobs
+12/07/12 13:29:43 [23823] Using constraint ((Owner=?="uc3"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
+12/07/12 13:29:43 [23823] Fetched 0 job ads from schedd
+12/07/12 13:29:43 [23823] Updating classad values for 19.0:
+12/07/12 13:29:43 [23823]    GridJobId = "batch pbs uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#19.0#1354908575"
+12/07/12 13:29:43 [23823]    LastRemoteStatusUpdate = 1354908580
+12/07/12 13:29:43 [23823] leaving doContactSchedd()
+12/07/12 13:29:43 [23823] (19.0) doEvaluateState called: gmState GM_SAVE_SANDBOX_ID, remoteState 0
+12/07/12 13:29:43 [23823] (19.0) gm state change: GM_SAVE_SANDBOX_ID -> GM_TRANSFER_INPUT
+12/07/12 13:29:43 [23823] entering FileTransfer::Init
+12/07/12 13:29:43 [23823] entering FileTransfer::SimpleInit
+12/07/12 13:29:43 [23823] Entering FileTransfer::InitDownloadFilenameRemaps
+12/07/12 13:29:43 [23823] FILETRANSFER: protocol "http" handled by "/opt/bosco/libexec/curl_plugin"
+12/07/12 13:29:43 [23823] FILETRANSFER: protocol "ftp" handled by "/opt/bosco/libexec/curl_plugin"
+12/07/12 13:29:43 [23823] FILETRANSFER: protocol "file" handled by "/opt/bosco/libexec/curl_plugin"
+12/07/12 13:29:43 [23823] FILETRANSFER: protocol "data" handled by "/opt/bosco/libexec/data_plugin"
+12/07/12 13:29:43 [23823] GAHP[24869] <- 'DOWNLOAD_SANDBOX 2 uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#19.0#1354908575 [\ Out\ =\ "_condor_stdout";\ TransferOutput\ =\ "";\ Cmd\ =\ "/bin/echo";\ CurrentTime\ =\ time();\ In\ =\ "/dev/null";\ Err\ =\ "_condor_stderr";\ TransferSocket\ =\ "<127.0.0.1:55604?sock=23604_e739_1>";\ TransferKey\ =\ "1#50c243a765f4a9f07e772aab";\ Iwd\ =\ "/share/home/osgvo/uc3/test-condor";\ TransferExecutable\ =\ false\ ]'
+12/07/12 13:29:43 [23823] GAHP[24869] -> 'S'
+12/07/12 13:29:43 [23823] DaemonCore: No more children processes to reap.
+12/07/12 13:29:43 [23823] IPVERIFY: checking uc3-bosco.uchicago.edu against 128.135.158.154
+12/07/12 13:29:43 [23823] IPVERIFY: matched 128.135.158.154 to 128.135.158.154
+12/07/12 13:29:43 [23823] IPVERIFY: ip found is 1
+12/07/12 13:29:43 [23823] entering FileTransfer::HandleCommands
+12/07/12 13:29:43 [23823] FileTransfer::HandleCommands read transkey=1#50c243a765f4a9f07e772aab
+12/07/12 13:29:43 [23823] Directory::setOwnerPriv() -- path /opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0.tmp does not exist (yet).
+12/07/12 13:29:43 [23823] Directory::Rewind(): path "/opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0.tmp" does not exist (yet) 
+12/07/12 13:29:43 [23823] Directory::setOwnerPriv() -- path /opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0 does not exist (yet).
+12/07/12 13:29:43 [23823] Directory::Rewind(): path "/opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0" does not exist (yet) 
+12/07/12 13:29:43 [23823] entering FileTransfer::Upload
+12/07/12 13:29:43 [23823] entering FileTransfer::UploadThread
+12/07/12 13:29:43 [23823] entering FileTransfer::DoUpload
+12/07/12 13:29:43 [23823] DoUpload: exiting at 3060
+12/07/12 13:29:44 [23823] DaemonCore: No more children processes to reap.
+12/07/12 13:29:44 [23823] File transfer completed successfully.
+12/07/12 13:29:44 [23823] GAHP[24869] <- 'RESULTS'
+12/07/12 13:29:44 [23823] GAHP[24869] -> 'R'
+12/07/12 13:29:44 [23823] GAHP[24869] -> 'S' '1'
+12/07/12 13:29:44 [23823] GAHP[24869] -> '2' 'NULL' '/home/mmb/bosco/sandbox/a551/a551874d/uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#19.0#1354908575'
+12/07/12 13:29:44 [23823] (19.0) doEvaluateState called: gmState GM_TRANSFER_INPUT, remoteState 0
+12/07/12 13:29:44 [23823] (19.0) gm state change: GM_TRANSFER_INPUT -> GM_SUBMIT
+12/07/12 13:29:44 [23823] GAHP[24844] <- 'BLAH_JOB_SUBMIT 2 [\ Out\ =\ "_condor_stdout";\ Environment\ =\ "";\ gridtype\ =\ "pbs";\ GridResource\ =\ "batch\ pbs\ mmb@midway-login2.rcc.uchicago.edu";\ Cmd\ =\ "/bin/echo";\ Args\ =\ "Hello";\ CurrentTime\ =\ time();\ Err\ =\ "_condor_stderr";\ In\ =\ "/dev/null";\ Iwd\ =\ "/home/mmb/bosco/sandbox/a551/a551874d/uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#19.0#1354908575"\ ]'
+12/07/12 13:29:44 [23823] GAHP[24844] -> 'S'
+12/07/12 13:29:45 [23823] GAHP[24844] <- 'RESULTS'
+12/07/12 13:29:45 [23823] GAHP[24844] -> 'R'
+12/07/12 13:29:45 [23823] GAHP[24844] -> 'S' '1'
+12/07/12 13:29:45 [23823] GAHP[24844] -> '2' '0' 'No error' 'pbs/20121207/Submitted' 'batch' 'job' '3325100'
+12/07/12 13:29:45 [23823] (19.0) doEvaluateState called: gmState GM_SUBMIT, remoteState 0
+12/07/12 13:29:45 [23823] ERROR "Bad BLAH_JOB_SUBMIT Result" at line 3517 in file /slots/12/dir_25528/userdir/src/condor_gridmanager/gahp-client.cpp
+12/07/12 13:34:35 OpSysMajorVersion:  5 
+12/07/12 13:34:35 OpSysShortName:  SL 
+12/07/12 13:34:35 OpSysLongName:  Scientific Linux SL release 5.5 (Boron) 
+12/07/12 13:34:35 OpSysAndVer:  SL5 
+12/07/12 13:34:35 OpSysLegacy:  LINUX 
+12/07/12 13:34:35 OpSysName:  SL 
+12/07/12 13:34:35 OpSysVer:  505 
+12/07/12 13:34:35 OpSys:  LINUX 
+12/07/12 13:34:35 Using processor count: 4 processors, 4 CPUs, 0 HTs
+12/07/12 13:34:35 Enumerating interfaces: lo 127.0.0.1 up
+12/07/12 13:34:35 Enumerating interfaces: eth0 10.1.3.89 up
+12/07/12 13:34:35 Enumerating interfaces: eth1 128.135.158.154 up
+12/07/12 13:34:35 Can't open directory "/config" as PRIV_USER, errno: 2 (No such file or directory)
+12/07/12 13:34:35 passwd_cache::cache_uid(): getpwnam("condor") failed: user not found
+12/07/12 13:34:35 passwd_cache::cache_uid(): getpwnam("condor") failed: user not found
+12/07/12 13:34:35 Setting maximum accepts per cycle 8.
+
+  - SLURM via PBS test 2:
+12/12/12 09:02:39 [363] (24.0) doEvaluateState called: gmState GM_INIT, remoteState 0
+12/12/12 09:02:39 [363] GAHP server pid = 392
+12/12/12 09:02:39 [363] GAHP[392] (stderr) -> Agent pid 413
+12/12/12 09:02:40 [363] GAHP server version: $GahpVersion: 1.16.5 Mar 31 2008 INFN blahpd (poly,new_esc_format) $
+12/12/12 09:02:40 [363] GAHP[392] <- 'COMMANDS'
+12/12/12 09:02:40 [363] GAHP[392] -> 'S' 'ASYNC_MODE_OFF' 'ASYNC_MODE_ON' 'BLAH_GET_HOSTPORT' 'BLAH_JOB_CANCEL' 'BLAH_JOB_HOLD' 'BLAH_JOB_REFRESH_PROXY' 'BLAH_JOB_RESUME' 'BLAH_JOB_SEND_PROXY_TO_WORKER_NODE' 'BLAH_JOB_STATUS' 'BLAH_JOB_SUBMIT' 'BLAH_SET_GLEXEC_DN' 'BLAH_SET_GLEXEC_OFF' 'BLAH_SET_SUDO_ID' 'BLAH_SET_SUDO_OFF' 'COMMANDS' 'QUIT' 'RESULTS' 'VERSION'
+12/12/12 09:02:40 [363] GAHP[392] <- 'ASYNC_MODE_ON'
+12/12/12 09:02:40 [363] GAHP[392] -> 'S' 'Async mode on'
+12/12/12 09:02:40 [363] GAHP server pid = 419
+12/12/12 09:02:40 [363] GAHP[419] (stderr) -> Agent pid 435
+12/12/12 09:02:42 [363] GAHP[419] (stderr) -> Allocated port 49121 for remote forward to
+12/12/12 09:02:42 [363] GAHP server version: $GahpVersion 2.0.1 Jul 30 2012 Condor_FT_GAHP $
+12/12/12 09:02:42 [363] GAHP[419] <- 'COMMANDS'
+12/12/12 09:02:42 [363] GAHP[419] -> 'S' 'DOWNLOAD_SANDBOX' 'UPLOAD_SANDBOX' 'DESTROY_SANDBOX' 'ASYNC_MODE_ON' 'ASYNC_MODE_OFF' 'RESULTS' 'QUIT' 'VERSION' 'COMMANDS'
+12/12/12 09:02:42 [363] GAHP[419] <- 'ASYNC_MODE_ON'
+12/12/12 09:02:42 [363] GAHP[419] -> 'S'
+12/12/12 09:02:42 [363] (24.0) gm state change: GM_INIT -> GM_START
+12/12/12 09:02:42 [363] (24.0) gm state change: GM_START -> GM_CLEAR_REQUEST
+12/12/12 09:02:42 [363] (24.0) gm state change: GM_CLEAR_REQUEST -> GM_UNSUBMITTED
+12/12/12 09:02:42 [363] (24.0) gm state change: GM_UNSUBMITTED -> GM_SAVE_SANDBOX_ID
+12/12/12 09:02:42 [363] SharedPortClient: sent connection request to collector uc3-bosco.uchicago.edu:11000?sock=collector for shared port id collector
+12/12/12 09:02:42 [363] Evaluating staleness of remote job statuses.
+12/12/12 09:02:44 [363] resource mmb@midway-login2.rcc.uchicago.edu is now up
+12/12/12 09:02:44 [363] in doContactSchedd()
+12/12/12 09:02:44 [363] SharedPortClient: sent connection request to schedd at <128.135.158.154:11000> for shared port id 23597_7536_3
+12/12/12 09:02:44 [363] querying for removed/held jobs
+12/12/12 09:02:44 [363] Using constraint ((Owner=?="uc3"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
+12/12/12 09:02:44 [363] Fetched 0 job ads from schedd
+12/12/12 09:02:44 [363] Updating classad values for 24.0:
+12/12/12 09:02:44 [363]    GridJobId = "batch pbs uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#24.0#1355324556"
+12/12/12 09:02:44 [363]    LastRemoteStatusUpdate = 1355324562
+12/12/12 09:02:44 [363] leaving doContactSchedd()
+12/12/12 09:02:44 [363] (24.0) doEvaluateState called: gmState GM_SAVE_SANDBOX_ID, remoteState 0
+12/12/12 09:02:44 [363] (24.0) gm state change: GM_SAVE_SANDBOX_ID -> GM_TRANSFER_INPUT
+12/12/12 09:02:44 [363] entering FileTransfer::Init
+12/12/12 09:02:44 [363] entering FileTransfer::SimpleInit
+12/12/12 09:02:44 [363] Entering FileTransfer::InitDownloadFilenameRemaps
+12/12/12 09:02:44 [363] FILETRANSFER: protocol "http" handled by "/opt/bosco/libexec/curl_plugin"
+12/12/12 09:02:44 [363] FILETRANSFER: protocol "ftp" handled by "/opt/bosco/libexec/curl_plugin"
+12/12/12 09:02:44 [363] FILETRANSFER: protocol "file" handled by "/opt/bosco/libexec/curl_plugin"
+12/12/12 09:02:44 [363] FILETRANSFER: protocol "data" handled by "/opt/bosco/libexec/data_plugin"
+12/12/12 09:02:44 [363] GAHP[419] <- 'DOWNLOAD_SANDBOX 2 uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#24.0#1355324556 [\ Out\ =\ "_condor_stdout";\ TransferOutput\ =\ "";\ Cmd\ =\ "/bin/echo";\ CurrentTime\ =\ time();\ In\ =\ "/dev/null";\ Err\ =\ "_condor_stderr";\ TransferSocket\ =\ "<127.0.0.1:49121?sock=23604_e739_20>";\ TransferKey\ =\ "1#50c89c9422ead8da608694b0";\ Iwd\ =\ "/share/home/osgvo/uc3/test-condor";\ TransferExecutable\ =\ false\ ]'
+12/12/12 09:02:44 [363] GAHP[419] -> 'S'
+12/12/12 09:02:44 [363] DaemonCore: No more children processes to reap.
+12/12/12 09:02:44 [363] IPVERIFY: checking uc3-bosco.uchicago.edu against 128.135.158.154
+12/12/12 09:02:44 [363] IPVERIFY: matched 128.135.158.154 to 128.135.158.154
+12/12/12 09:02:44 [363] IPVERIFY: ip found is 1
+12/12/12 09:02:44 [363] entering FileTransfer::HandleCommands
+12/12/12 09:02:44 [363] FileTransfer::HandleCommands read transkey=1#50c89c9422ead8da608694b0
+12/12/12 09:02:44 [363] Directory::setOwnerPriv() -- path /opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0.tmp does not exist (yet).
+12/12/12 09:02:44 [363] Directory::Rewind(): path "/opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0.tmp" does not exist (yet) 
+12/12/12 09:02:44 [363] Directory::setOwnerPriv() -- path /opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0 does not exist (yet).
+12/12/12 09:02:44 [363] Directory::Rewind(): path "/opt/bosco/local.uc3-bosco/spool/0/0/cluster0.proc0.subproc0" does not exist (yet) 
+12/12/12 09:02:44 [363] entering FileTransfer::Upload
+12/12/12 09:02:44 [363] entering FileTransfer::UploadThread
+12/12/12 09:02:44 [363] entering FileTransfer::DoUpload
+12/12/12 09:02:44 [363] DoUpload: exiting at 3060
+12/12/12 09:02:44 [363] DaemonCore: No more children processes to reap.
+12/12/12 09:02:44 [363] File transfer completed successfully.
+12/12/12 09:02:44 [363] GAHP[419] <- 'RESULTS'
+12/12/12 09:02:44 [363] GAHP[419] -> 'R'
+12/12/12 09:02:44 [363] GAHP[419] -> 'S' '1'
+12/12/12 09:02:44 [363] GAHP[419] -> '2' 'NULL' '/home/mmb/bosco/sandbox/13e7/13e72caf/uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#24.0#1355324556'
+12/12/12 09:02:44 [363] (24.0) doEvaluateState called: gmState GM_TRANSFER_INPUT, remoteState 0
+12/12/12 09:02:44 [363] (24.0) gm state change: GM_TRANSFER_INPUT -> GM_SUBMIT
+12/12/12 09:02:44 [363] GAHP[392] <- 'BLAH_JOB_SUBMIT 2 [\ Out\ =\ "_condor_stdout";\ Environment\ =\ "";\ gridtype\ =\ "pbs";\ GridResource\ =\ "batch\ pbs\ mmb@midway-login2.rcc.uchicago.edu";\ Cmd\ =\ "/bin/echo";\ Args\ =\ "Hello";\ CurrentTime\ =\ time();\ Err\ =\ "_condor_stderr";\ In\ =\ "/dev/null";\ Iwd\ =\ "/home/mmb/bosco/sandbox/13e7/13e72caf/uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#24.0#1355324556"\ ]'
+12/12/12 09:02:44 [363] GAHP[392] -> 'S'
+12/12/12 09:02:46 [363] GAHP[392] <- 'RESULTS'
+12/12/12 09:02:46 [363] GAHP[392] -> 'R'
+12/12/12 09:02:46 [363] GAHP[392] -> 'S' '1'
+12/12/12 09:02:46 [363] GAHP[392] -> '2' '0' 'No error' 'pbs/20121212/3369743'
+12/12/12 09:02:46 [363] (24.0) doEvaluateState called: gmState GM_SUBMIT, remoteState 0
+12/12/12 09:02:46 [363] directory_util::rec_touch_file: Creating directory /tmp 
+12/12/12 09:02:46 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks 
+12/12/12 09:02:46 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks/27 
+12/12/12 09:02:46 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks/27/19 
+12/12/12 09:02:46 [363] FileLock object is updating timestamp on: /tmp/condorLocks/27/19/50634547879889.lockc
+12/12/12 09:02:46 [363] WriteUserLog::initialize: opened /opt/bosco/local.uc3-bosco/bosco-test/tmp.MWflFJN353/logfile successfully
+12/12/12 09:02:46 [363] (24.0) Writing grid submit record to user logfile
+12/12/12 09:02:46 [363] FileLock::obtain(1) - @1355324566.457000 lock on /tmp/condorLocks/27/19/50634547879889.lockc now WRITE
+12/12/12 09:02:46 [363] FileLock::obtain(2) - @1355324566.463418 lock on /tmp/condorLocks/27/19/50634547879889.lockc now UNLOCKED
+12/12/12 09:02:46 [363] FileLock::obtain(1) - @1355324566.463495 lock on /tmp/condorLocks/27/19/50634547879889.lockc now WRITE
+12/12/12 09:02:46 [363] directory_util::rec_clean_up: file /tmp/condorLocks/27/19/50634547879889.lockc has been deleted. 
+12/12/12 09:02:46 [363] Lock file /tmp/condorLocks/27/19/50634547879889.lockc has been deleted. 
+12/12/12 09:02:46 [363] FileLock::obtain(2) - @1355324566.463625 lock on /tmp/condorLocks/27/19/50634547879889.lockc now UNLOCKED
+12/12/12 09:02:46 [363] (24.0) gm state change: GM_SUBMIT -> GM_SUBMIT_SAVE
+12/12/12 09:02:49 [363] in doContactSchedd()
+12/12/12 09:02:49 [363] SharedPortClient: sent connection request to schedd at <128.135.158.154:11000> for shared port id 23597_7536_3
+12/12/12 09:02:49 [363] querying for removed/held jobs
+12/12/12 09:02:49 [363] Using constraint ((Owner=?="uc3"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
+12/12/12 09:02:49 [363] Fetched 0 job ads from schedd
+12/12/12 09:02:49 [363] Updating classad values for 24.0:
+12/12/12 09:02:49 [363]    GridJobId = "batch pbs uc3-bosco.uchicago.edu_11000_uc3-bosco.uchicago.edu#24.0#1355324556 pbs/20121212/3369743"
+12/12/12 09:02:49 [363] leaving doContactSchedd()
+12/12/12 09:02:49 [363] (24.0) doEvaluateState called: gmState GM_SUBMIT_SAVE, remoteState 0
+12/12/12 09:02:49 [363] (24.0) gm state change: GM_SUBMIT_SAVE -> GM_SUBMITTED
+12/12/12 09:03:36 [363] Received CHECK_LEASES signal
+12/12/12 09:03:36 [363] in doContactSchedd()
+12/12/12 09:03:36 [363] SharedPortClient: sent connection request to schedd at <128.135.158.154:11000> for shared port id 23597_7536_3
+12/12/12 09:03:36 [363] querying for renewed leases
+12/12/12 09:03:36 [363] querying for removed/held jobs
+12/12/12 09:03:36 [363] Using constraint ((Owner=?="uc3"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
+12/12/12 09:03:36 [363] Fetched 0 job ads from schedd
+12/12/12 09:03:36 [363] leaving doContactSchedd()
+12/12/12 09:03:40 [363] GAHP[392] <- 'RESULTS'
+12/12/12 09:03:40 [363] GAHP[392] -> 'S' '0'
+12/12/12 09:03:42 [363] GAHP[419] <- 'RESULTS'
+12/12/12 09:03:42 [363] GAHP[419] -> 'S' '0'
+12/12/12 09:03:42 [363] Evaluating staleness of remote job statuses.
+12/12/12 09:03:49 [363] (24.0) doEvaluateState called: gmState GM_SUBMITTED, remoteState 0
+12/12/12 09:03:49 [363] (24.0) gm state change: GM_SUBMITTED -> GM_POLL_ACTIVE
+12/12/12 09:03:49 [363] GAHP[392] <- 'BLAH_JOB_STATUS 3 pbs/20121212/3369743'
+12/12/12 09:03:49 [363] GAHP[392] -> 'S'
+12/12/12 09:03:50 [363] GAHP[392] <- 'RESULTS'
+12/12/12 09:03:50 [363] GAHP[392] -> 'R'
+12/12/12 09:03:50 [363] GAHP[392] -> 'S' '1'
+12/12/12 09:03:50 [363] GAHP[392] -> '3' '1' 'Error allocating memory' '0' 'N/A'
+12/12/12 09:03:50 [363] (24.0) doEvaluateState called: gmState GM_POLL_ACTIVE, remoteState 0
+12/12/12 09:03:50 [363] (24.0) blah_job_status() failed: Error allocating memory
+12/12/12 09:03:50 [363] (24.0) gm state change: GM_POLL_ACTIVE -> GM_HOLD
+12/12/12 09:03:50 [363] directory_util::rec_touch_file: Creating directory /tmp 
+12/12/12 09:03:50 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks 
+12/12/12 09:03:50 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks/27 
+12/12/12 09:03:50 [363] directory_util::rec_touch_file: Creating directory /tmp/condorLocks/27/19 
+12/12/12 09:03:50 [363] FileLock object is updating timestamp on: /tmp/condorLocks/27/19/50634547879889.lockc
+12/12/12 09:03:50 [363] WriteUserLog::initialize: opened /opt/bosco/local.uc3-bosco/bosco-test/tmp.MWflFJN353/logfile successfully
+12/12/12 09:03:50 [363] (24.0) Writing hold record to user logfile
+12/12/12 09:03:50 [363] FileLock::obtain(1) - @1355324630.888747 lock on /tmp/condorLocks/27/19/50634547879889.lockc now WRITE
+12/12/12 09:03:50 [363] FileLock::obtain(2) - @1355324630.891806 lock on /tmp/condorLocks/27/19/50634547879889.lockc now UNLOCKED
+12/12/12 09:03:50 [363] FileLock::obtain(1) - @1355324630.891882 lock on /tmp/condorLocks/27/19/50634547879889.lockc now WRITE
+12/12/12 09:03:50 [363] directory_util::rec_clean_up: file /tmp/condorLocks/27/19/50634547879889.lockc has been deleted. 
+12/12/12 09:03:50 [363] Lock file /tmp/condorLocks/27/19/50634547879889.lockc has been deleted. 
+12/12/12 09:03:50 [363] FileLock::obtain(2) - @1355324630.892015 lock on /tmp/condorLocks/27/19/50634547879889.lockc now UNLOCKED
+12/12/12 09:03:50 [363] (24.0) gm state change: GM_HOLD -> GM_DELETE
+12/12/12 09:03:50 [363] in doContactSchedd()
+12/12/12 09:03:50 [363] SharedPortClient: sent connection request to schedd at <128.135.158.154:11000> for shared port id 23597_7536_3
+12/12/12 09:03:50 [363] querying for removed/held jobs
+12/12/12 09:03:50 [363] Using constraint ((Owner=?="uc3"&&JobUniverse==9)) && ((Managed =!= "ScheddDone")) && (JobStatus == 3 || JobStatus == 4 || (JobStatus == 5 && Managed =?= "External"))
+12/12/12 09:03:50 [363] Fetched 0 job ads from schedd
+12/12/12 09:03:50 [363] Updating classad values for 24.0:
+12/12/12 09:03:50 [363]    EnteredCurrentStatus = 1355324630
+12/12/12 09:03:50 [363]    HoldReason = "Error allocating memory"
+12/12/12 09:03:50 [363]    HoldReasonCode = 0
+12/12/12 09:03:50 [363]    HoldReasonSubCode = 0
+12/12/12 09:03:50 [363]    JobStatus = 5
+12/12/12 09:03:50 [363]    Managed = "Schedd"
+12/12/12 09:03:50 [363]    NumSystemHolds = 1
+12/12/12 09:03:50 [363]    ReleaseReason = undefined
+12/12/12 09:03:50 [363] No jobs left, shutting down
+12/12/12 09:03:50 [363] leaving doContactSchedd()
+12/12/12 09:03:50 [363] Got SIGTERM. Performing graceful shutdown.
+12/12/12 09:03:50 [363] Started timer to call main_shutdown_fast in 1800 seconds
+12/12/12 09:03:50 [363] **** condor_gridmanager (condor_GRIDMANAGER) pid 363 EXITING WITH STATUS 0
+
+### BOSCO 1.1 beta
+
+Summary of the BOSCO 1.1 beta tests
+
+  - BOSCO version: 1.1 beta
+  - Single/Multi user:
+  - Platform:
+  - Hosts:
+  - Tester:
+
+| Test                | Result | Date | Notes |
+| :------------------ | :----- | :--- | :---- |
+| **Install**         |        |      |       |
+| **Condor add/test** |        |      |       |
+| **LSF add/test**    |        |      |       |
+| **PBS add/test**    |        |      |       |
+| **SGE add/test**    |        |      |       |
+| **SLURM add/test**  |        |      |       |
+| **Remove resource** |        |      |       |
+| **Grid Jobs**       |        |      |       |
+| **Vanilla Jobs**    |        |      |       |
+| **Flocking**        |        |      |       |
+| **Querying**        |        |      |       |
+| **Uninstall**       |        |      |       |
+
+Notes:
+
+#### Individual tests
+
+### Template
+
+Test by NAME:
+
+  - BOSCO version:
+  - Single/Multi user:
+  - Platform:
+  - Hosts:
+  - Tester:
+
+| Test                | Result | Date | Notes |
+| :------------------ | :----- | :--- | :---- |
+| **Install**         |        |      |       |
+| **Condor add/test** |        |      |       |
+| **LSF add/test**    |        |      |       |
+| **PBS add/test**    |        |      |       |
+| **SGE add/test**    |        |      |       |
+| **SLURM add/test**  |        |      |       |
+| **Remove resource** |        |      |       |
+| **Grid Jobs**       |        |      |       |
+| **Vanilla Jobs**    |        |      |       |
+| **Flocking**        |        |      |       |
+| **Querying**        |        |      |       |
+| **Uninstall**       |        |      |       |
+
+Notes:
+
+BoscoInstall
+
+# Comments
+
+
diff --git a/mkdocs.yml b/mkdocs.yml
index dfa1f9c..31bf3c0 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,6 +8,15 @@ pages:
 - 'Introduction': 'index.md'
 - 'Bosco Installation': 'BoscoInstall.md'
 - 'Bosco MultiUser Installation': 'BoscoMultiUser.md'
+ - 'Bosco Quick Start': 'BoscoQuickStart.md'
+ - 'Bosco tests for developers and testers': 'TestBoSCO.md'
+ - 'BoscoR': 'BoscoR.md'
+ - 'BOSCO Roadmap (planned and desired features)': 'BoscoRoadmap.md'
+ - 'BOSCO version 0': 'BoSCOv0.md'
+ - 'BOSCO version 1': 'BoSCOv1.md'
+ - 'BOSCO version 1.1': 'BoSCOv1p1.md'
+ - 'BOSCO version 1.2': 'BoSCOv1p2.md'
+ - 'CICiForum': 'CICiForum130418.md'
 extra_css:
 - css/extra.css