The DCC provides several file systems for all users of the cluster. General partitions reside on Isilon network-attached storage arrays connected at 40Gbps or 10Gbps. For tips on using the file systems, visit the DCC User Guide.

Sensitive data is not permitted on cluster storage.

Shared Scratch Space (/work)

Researchers may temporarily store large-volume data in /work, a 450TB unpartitioned volume that is shared across all cluster users and is not backed up. Files in /work are automatically deleted after 75 days. For full details on how /work is managed as a shared resource, visit our Acceptable Use Policy.

View your current usage of /work with: storage-report -u <netid>
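Because the 75-day purge is automatic, it can be useful to spot files approaching the cutoff before they disappear. A minimal sketch using standard `find`; the `/work/$USER` path in the example is an assumption about how your scratch area is laid out:

```shell
# List files under a directory that are older than a given number of days,
# e.g. to find files approaching the 75-day /work purge.
aging_files() {
    dir=$1; days=$2
    find "$dir" -type f -mtime "+$days" -print 2>/dev/null | sort
}

# Example (path is illustrative):
# aging_files "/work/$USER" 60
```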

Group Storage (/hpc/group/groupname)

DCC groups are automatically granted 1TB of storage. DCC points of contact are responsible for working with their groups to create a schema for their group's partition as well as establishing retention policies. Due to the large volume size, backups are only available for 7 days. Groups that need more than 1TB or a longer backup policy can purchase it through special arrangement.

View your current volume size and amount used with: df -h /hpc/group/groupname
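When a group nears its quota, `df` shows the total used but not where the space went. A hedged sketch using standard `du` to rank subdirectories by size; the group path is a placeholder:

```shell
# Summarize per-subdirectory usage of a volume, largest first,
# to see what is consuming a group's quota.
group_usage() {
    du -sh "$1"/* 2>/dev/null | sort -rh
}

# Example (path is illustrative):
# group_usage /hpc/group/groupname
```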

Home Directories (/hpc/home/netid)

All DCC users are automatically granted a 10GB personal home directory. This partition is new to the DCC and allows users to store their own working files without impacting the lab's overall quota. Home directories are backed up according to standard retention policies. When users leave a lab, they are responsible for moving any lab-relevant files back to the lab before departing. When a user's Duke ID is deactivated, their access and home directory are automatically removed from the cluster.

Archival Storage (/datacommons/groupname)

Archival storage is available to researchers at $.08/GB/Year as part of the OIT standard storage options. Options for storage with backups exist at additional cost; see the OIT storage options for more information. Data Commons is available for a fee to anyone at Duke and is mounted to the DCC to aid in moving data to the cluster for jobs. Because its I/O will not be as performant as cluster storage, jobs should not be configured in a way that causes excessive read/write activity against Data Commons storage.
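One common pattern that avoids excessive read/write against the archival mount is to stage input from /datacommons to /work once at the start of a job, run all heavy I/O against scratch, and copy results back once at the end. A minimal sketch of the stage-in step; the paths and function name are illustrative:

```shell
# Bulk-copy an input dataset from slower archival storage to fast
# scratch, so per-file job I/O hits cluster storage instead.
stage_in() {
    src=$1; scratch=$2
    mkdir -p "$scratch"
    cp -r "$src"/. "$scratch"/
}

# Example inside a job script (paths are illustrative):
# stage_in /datacommons/groupname/dataset "/work/$USER/dataset"
# ... run the analysis against "/work/$USER/dataset" ...
# then copy results back to /datacommons once at the end.
```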