CSCS supports different file systems, whose specifications are summarized in the table below:
| | scratch (Alps) | scratch (Clusters) | /users | /capstor/store | /project | /store |
| --- | --- | --- | --- | --- | --- | --- |
| Type | Lustre | GPFS | GPFS or VAST | Lustre | GPFS | GPFS |
| Quota | Soft (150 TB and 1M files) | None | 50 GB/user and 500k files | Defined on a per-project basis | Maximum 50k files/TB | Maximum 50k files/TB |
| Expiration | 30 days | 30 days | Account closure | End of the project/contract | End of the project | End of the contract |
| Data Backup | None | None | 90 days | 90 days | 90 days | 90 days |
| Access Speed | Fast | Fast | Slow | Fast | Medium | Slow |
| Capacity | 91 PB | 1.9 PB | 160 TB | 91 PB | 6.0 PB | 7.6 PB |
To check your usage, please type the command `quota` on the front end Ela and on the User Access Nodes (UANs) of Eiger, Daint.Alps and Santis.
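For example, a minimal check from the Ela front end could look like the sketch below; the session is only illustrative and the exact report layout differs between file systems:

```bash
# Log in to the front end and print the per-user quota report
ssh <username>@ela.cscs.ch
quota
```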
Please build large software projects that do not fit in `$HOME` on `$PROJECT` instead. Since you should not run jobs from `$HOME` or `$PROJECT`, please copy the executables, libraries and data sets needed to run your simulations to `$SCRATCH` with the Slurm transfer queue, as sketched below.
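A minimal staging job for the transfer queue might look like the following sketch; the partition name `xfer` and the `my_app`/`my_run` paths are assumptions that need to be adapted to your own project:

```bash
#!/bin/bash -l
#SBATCH --job-name=stage_data
#SBATCH --partition=xfer          # assumed name of the Slurm transfer queue
#SBATCH --time=00:30:00
#SBATCH --ntasks=1

# Stage executables, libraries and input data from $PROJECT to $SCRATCH
mkdir -p "$SCRATCH/my_run"        # hypothetical run directory
rsync -av "$PROJECT/my_app/bin" "$PROJECT/my_app/input" "$SCRATCH/my_run/"
```

The compute job can then be submitted with `sbatch --dependency=afterok:<jobid>` so that the simulation starts only once the data is in place.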
Users can also write temporary builds in `/dev/shm`, a file system that uses virtual memory rather than a persistent storage device: please note that files older than 24 hours will be deleted automatically.
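As an illustration, a throw-away build in `/dev/shm` could be organised as follows; the source archive and install prefix are placeholders, and the result is installed to `$PROJECT` because anything left in `/dev/shm` is removed after 24 hours:

```bash
# Create a private build directory backed by virtual memory
BUILD_DIR=$(mktemp -d /dev/shm/"$USER"-build-XXXXXX)
cd "$BUILD_DIR"

# Unpack and build (my_app.tar.gz and the install prefix are placeholders)
tar xf "$PROJECT/src/my_app.tar.gz"
cd my_app
./configure --prefix="$PROJECT/apps/my_app"
make -j && make install

# Clean up: /dev/shm contents count against node memory and are purged after 24 hours anyway
cd && rm -rf "$BUILD_DIR"
```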