Knowledge Base Migration
The CSCS Knowledge Base is being migrated to the CSCS Documentation: please check the new File Systems page.
CSCS supports different file systems, whose specifications are summarized in the table below:
| | scratch (Alps) | scratch (Clusters) | /users | /capstor/store |
|---|---|---|---|---|
| Type | Lustre | GPFS | VAST | Lustre |
| Quota | Soft (150 TB and 1M files) | None | 50 GB/user and 500k files | Defined on a per-project basis |
| Expiration | 30 days | 30 days | Account closure | End of the project/contract |
| Data Backup | None | None | 90 days | 90 days |
| Access Speed | Fast | Fast | Slow | Fast |
| Capacity | 91 PB | 1.9 PB | 160 TB | 91 PB |
To check your usage, please type the command `quota` on the front end Ela and on the User Access Nodes (UANs) of Eiger, Daint.Alps and Santis. Please build large software projects that do not fit in `$HOME` on `$PROJECT` instead, and do not run jobs from `$HOME` or `$PROJECT`: copy the executables, libraries and data sets needed to run your simulations to `$SCRATCH` with the Slurm transfer queue.
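As an illustration, a staging job submitted to the transfer queue might look like the sketch below. The partition name `xfer` and the directory names `my_app` and `input_data` are assumptions for this example, not values from this page: check the CSCS documentation for the actual queue name.

```shell
#!/bin/bash -l
# Hypothetical staging job: the partition name "xfer" and the copied
# directories are placeholders -- check the CSCS docs for real values.
#SBATCH --job-name=stage_data
#SBATCH --partition=xfer        # assumed name of the Slurm transfer queue
#SBATCH --time=00:30:00

# Copy executables and input data from the project space to scratch
cp -r "$PROJECT/my_app" "$SCRATCH/"
cp -r "$PROJECT/input_data" "$SCRATCH/"
```

Submitted with `sbatch`, the compute job can then be chained after the copy with Slurm's `--dependency=afterok:<jobid>` option so it only starts once the data is in place.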
Users can also write temporary builds to `/dev/shm`, a file system backed by virtual memory rather than a persistent storage device. Please note that files older than 24 hours are deleted automatically.
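A temporary build in `/dev/shm` could look like the following sketch; the directory name `mybuild` and the build commands are placeholders, and anything worth keeping must be copied to persistent storage before the 24-hour cleanup.

```shell
# Build in memory-backed storage; "mybuild" is a placeholder name.
BUILD_DIR="/dev/shm/${USER:-$(id -un)}/mybuild"
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"

# ... configure and compile here, for example:
#   cmake "$HOME/my_project" && make -j
# Copy the results somewhere persistent before the 24-hour cleanup:
#   cp my_executable "$SCRATCH/"
echo "working in $BUILD_DIR"
```

Because `/dev/shm` lives in RAM, compilation with many small intermediate files is typically much faster there than on a parallel file system.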