CSCS supports different file systems, whose specifications are summarized in the table below:
| File system | scratch (Piz Daint) | /capstor/scratch/cscs (Alps) | scratch (Clusters) | /users | /project | /store |
|---|---|---|---|---|---|---|
| Type | Lustre | Lustre | GPFS | GPFS | GPFS | GPFS |
| Quota | Soft (1M files) | Soft (150 TB and 1M files) | None | 50 GB/user and 500k files | Maximum 50k files/TB | Maximum 50k files/TB |
| Expiration | 30 days | 30 days | 30 days | Account closure | End of the project | End of the contract |
| Data Backup | None | None | None | 90 days | 90 days | 90 days |
| Access Speed | Fast | Fast | Fast | Slow | Medium | Slow |
| Capacity | 8.8 PB | 91 PB | 1.9 PB | 160 TB | 6.0 PB | 7.6 PB |
To check your usage, please type the command `quota` on the front end Ela.
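For example, a minimal sketch of checking usage from the front end (`jdoe` is a placeholder username; the exact output columns may differ):

```bash
# Log in to the front end (replace jdoe with your CSCS username)
ssh jdoe@ela.cscs.ch

# Print usage and quotas for the file systems listed above
quota
```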
Please build large software projects that do not fit in `$HOME` on `$PROJECT` instead. Since you should not run jobs from `$HOME` or `$PROJECT`, please copy the executables, libraries, and data sets needed to run your simulations to `$SCRATCH` using the Slurm transfer queue, as sketched below.
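A minimal transfer-job sketch, assuming the transfer queue is exposed as a Slurm partition named `xfer` (check `sinfo` on your system) and a hypothetical project directory `mysim`:

```bash
#!/bin/bash -l
#SBATCH --job-name=stage_data
#SBATCH --partition=xfer        # assumed name of the Slurm transfer partition
#SBATCH --time=01:00:00
#SBATCH --ntasks=1

# Copy executables, libraries, and input data from $PROJECT to $SCRATCH
# (mysim is a hypothetical directory name)
rsync -av "$PROJECT/mysim/" "$SCRATCH/mysim/"
```

Submit it with `sbatch` and launch your simulation from `$SCRATCH` once the copy has completed.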
Users can also write temporary builds on `/dev/shm`, a file system using virtual memory rather than a persistent storage device: please note that files older than 24 hours will be deleted automatically.
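A sketch of such a temporary build (the archive name and paths are hypothetical); copy anything you want to keep back to persistent storage before the 24-hour cleanup:

```bash
# Create a private build directory in memory-backed storage
BUILD_DIR=$(mktemp -d /dev/shm/build.XXXXXX)
cd "$BUILD_DIR"

# Unpack and build (myapp.tar.gz is a hypothetical source archive)
tar xzf "$PROJECT/src/myapp.tar.gz"
cd myapp && make -j

# Keep the result: /dev/shm is volatile and purged after 24 hours
cp myapp "$PROJECT/bin/"
rm -rf "$BUILD_DIR"
```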