The Slurm queue xfer is available on production clusters to handle data transfers between internal CSCS file systems. The queue has been created to transfer files and folders from /users and /capstor/store to the /capstor/scratch or /iopstor/scratch file systems (stage-in) and vice versa (stage-out). The available file systems may differ between clusters. The following commands are currently available on clusters supporting the xfer queue:
cp, mv, rm and rsync
You can adapt the Slurm batch script below to transfer your input data to $SCRATCH, setting the variable command to the command you intend to use, chosen from the list above:
#!/bin/bash -l
#
#SBATCH --time=24:00:00
#SBATCH --ntasks=1
#SBATCH --partition=xfer

command="rsync -av"

echo "$SLURM_JOB_NAME started on $(date): executing $command $1 $2"

$command $1 $2
result=$?

echo "$SLURM_JOB_NAME finished on $(date) with $result"

exit $result
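For instance, to perform a plain recursive copy or a move instead of the default rsync, the command variable can be set as follows (illustrative values, pick the flags that suit your data):

command="cp -r"       # plain recursive copy
# command="mv"        # move the data instead of copying it
# command="rsync -av" # default used in the template above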
The template Slurm batch script above requires two command-line arguments: the source and the destination file (or folder) to be copied. You can submit the stage job with a meaningful job name as shown below:
# stage-in
$ sbatch --job-name=stage_in stage.sbatch ${PROJECT}/my_folder ${SCRATCH}/my_folder
Submitted batch job 12345
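The same template can be reused for the reverse transfer (stage-out) by swapping the source and destination arguments; my_results below is a placeholder for your output folder:

# stage-out
$ sbatch --job-name=stage_out stage.sbatch ${SCRATCH}/my_results ${PROJECT}/my_results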
It is possible to submit jobs that depend on the successful execution of the data transfer using the dependency flag --dependency=afterok:TRANSFER_JOB_ID, where TRANSFER_JOB_ID is the job ID of the transfer job (e.g. 12345). This ensures that the production job is scheduled for execution only after the stage job has completed successfully, i.e. it ran to completion with an exit code of zero.
sbatch --dependency=afterok:12345 production_job.sh
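If you want to chain the whole workflow from a single script, the --parsable option of sbatch prints only the job ID, which can then be passed directly to the dependency flag. The sketch below assumes the stage.sbatch template and production_job.sh from above; the stage-out step and the my_results paths are illustrative:

#!/bin/bash -l

# Submit the stage-in job and capture its job ID (--parsable prints the ID only)
STAGE_ID=$(sbatch --parsable --job-name=stage_in stage.sbatch ${PROJECT}/my_folder ${SCRATCH}/my_folder)

# Submit the production job: it starts only if the stage-in job exits with code zero
PROD_ID=$(sbatch --parsable --dependency=afterok:${STAGE_ID} production_job.sh)

# Stage the results back once the production job has terminated (afterany: regardless of its exit code)
sbatch --job-name=stage_out --dependency=afterany:${PROD_ID} stage.sbatch ${SCRATCH}/my_results ${PROJECT}/my_results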
Additional options for Slurm jobs are described on the Running jobs page.