The Slurm command salloc allocates a set of nodes, executes a command, and then releases the allocation when the command is finished. If no command is specified, the value of SallocDefaultCommand in slurm.conf is used. If SallocDefaultCommand is not set, salloc runs the user's default shell, which is the standard behaviour on CSCS computing systems. The output below shows that SallocDefaultCommand is not defined:
$ scontrol show config | grep Salloc
SallocDefaultCommand    = (null)
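As a side note, a command can also be passed directly to salloc: the command is executed on the login node within the allocation's environment, and the allocation is released as soon as the command finishes. A minimal sketch, reusing the partition and constraint of the examples below (the node name shown is only illustrative); srun is used so that hostname actually runs on the allocated compute node:
$ salloc -C gpu -p debug --time=00:01:00 srun hostname
nid03508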
Since SallocDefaultCommand is not set, even within an interactive session with salloc you still need to use srun to run a command on the allocated compute node. Otherwise, the command is executed on the login node, as shown in the following example:
$ salloc -p debug -C gpu --time=00:01:00
$ hostname
daint101
$ srun hostname
nid03508
In the example above, the Linux command hostname, which prints the system's host name, is executed within an interactive session created with salloc. When the command is invoked without srun, the host name returned is that of the Piz Daint login node daint101. On the other hand, when the command is invoked with srun, it is executed on the compute node belonging to the ongoing interactive allocation, and the host name returned is the name of that node.
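A typical interactive debugging workflow therefore looks like the following sketch, where ./my_app is a placeholder for your own executable and the requested wall time is only illustrative:
$ salloc -C gpu -p debug --time=00:10:00
$ srun ./my_app    # executed on the allocated compute node
$ exit             # leave the salloc shell and release the allocation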
Please don't forget that interactive allocations are meant for debugging purposes only and have a limited wall time duration, as explained in Running Jobs. In order to debug interactively on a node, you can also use the command srun directly:
$ srun -A <account> -C gpu -p debug --time=00:01:00 --pty /bin/bash -l
$ hostname
nid03509
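To end the interactive session and release the allocated node, simply exit the remote shell; you are then back on the login node (the host name below mirrors the examples above):
$ exit
$ hostname
daint101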