Storage

User Account Space

Each user account has a quota of 50GB in the home directory. If you need more space, please use your group space.

Group Space

Each research group has a quota of 20TB in /resnick/groups/<groupname>. Anything above this 20TB will be charged at standard storage rates; for details, please have your PI send an email to [email protected].

Scratch Space

We have moved our scratch space to the Vast filesystem. It can be accessed at /resnick/scratch. We recommend that you create a directory for yourself there and use it for scratch space from now on. All of this storage is on NVMe, so there is no need to differentiate between filesystems for higher I/O needs.
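As a one-time setup, you can create your personal scratch directory as sketched below. The /resnick/scratch path is taken from above; the SCRATCH_ROOT variable and the directory-exists guard are just defensive conveniences, not part of the cluster configuration.

```shell
# One-time setup of a personal scratch directory on the Vast filesystem.
# SCRATCH_ROOT defaults to the mount point described above; the guard
# simply skips the setup on machines where that mount is absent.
SCRATCH_ROOT="${SCRATCH_ROOT:-/resnick/scratch}"
USER="${USER:-$(id -un)}"
if [ -d "$SCRATCH_ROOT" ]; then
    mkdir -p "${SCRATCH_ROOT}/${USER}"   # named after your cluster username
    chmod 700 "${SCRATCH_ROOT}/${USER}"  # keep it private to you
fi
```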

The /resnick/scratch partition has a default quota of 20TB; the quota can be extended to 50TB for 30 days upon request. Please send an email to [email protected] for information.

These disks are truly meant as scratch space. Any files not accessed in 14 days will be automatically purged. Any method of artificially changing the date/time stamps of a file is strictly prohibited and subject to Caltech's Honor Code.
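To see which of your files are approaching the purge window, you can check access times with find. A sketch, with the scratch path taken from above; the 7-day threshold is illustrative:

```shell
# List files in your scratch directory that have not been accessed for
# more than 7 days -- under the 14-day policy these will be purged soon
# unless they are read again. The guard skips machines without the
# scratch mount.
SCRATCH_DIR="/resnick/scratch/${USER}"
if [ -d "$SCRATCH_DIR" ]; then
    find "$SCRATCH_DIR" -type f -atime +7
fi
```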

Checking Quotas for User and Group on /resnick

To check your home directory and group area usage, simply run hpcquota.

If you would also like to see your usage in /resnick/scratch, add the -s switch to the hpcquota command.

If you would like to see the usage of all users in your group space, you can add the -a switch to the hpcquota command.

Here are the other available options:

[naveed@head4 ~]$ hpcquota -h
usage: hpcquota [-h] [-v] [-g GROUP] [-u USER] [-a] [-s]

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         set args.verbose mode
  -g GROUP, --group GROUP
                        Group to manage
  -u USER, --user USER  User to manage
  -a, --all             Get all user information for group space
  -s, --scratch         Get scratch information

Snapshots

The Vast filesystem does have snapshots enabled. The snapshot directory is not listable, but can be found by changing directory to ".snapshot".
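For example, you can enter .snapshot directly even though it does not show up in a directory listing of the parent. A sketch; the group path below ("mygroup") is an assumed example, and the snapshot names you will see vary by site:

```shell
# .snapshot is hidden from "ls" in the parent directory but can be
# accessed directly. The group path below is an assumed example.
SNAPDIR="/resnick/groups/mygroup/.snapshot"
if [ -d "$SNAPDIR" ]; then
    ls "$SNAPDIR"   # see which snapshots are available
    # then copy an older version of a file back out, e.g.:
    # cp "$SNAPDIR/<snapshot-name>/path/to/file" ~/restored-file
fi
```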

Backup and Archive

There is no managed BCP/DR-style backup or archival system in place on the central HPC cluster. Please be sure to migrate any critical data to systems or services outside of the cluster storage on a routine basis. For information on running backups using the Duplicity client, see this page. (Duplicity supports saving backups to AWS, Google, Backblaze B2, SSH-based hosts, and others.)