Last updated: 1/1/21 8:30 AM

We are currently investigating two reported problems:

  • sporadic login issues to the DCC login nodes
  • fiber channel performance from some DCC hosts

Research Computing maintenance scheduled for Jan 4th to Jan 8th has concluded. Please be aware of the following impacts:

Login nodes can now be found at:

MFA is now required for DCC login. To simplify your login process, use ssh public key authentication. To set up and enable ssh public key authentication, update your “SSH Public keys” under “Advanced User Options” at:

When logging in, if you see an error similar to “negotiation failure” or “algorithm not supported”, please upgrade your ssh client to a newer version.
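A minimal sketch of generating a key pair to paste into the portal (the key file name and comment are examples; the empty passphrase is for illustration only, and a real passphrase is recommended):

```shell
# Create ~/.ssh if needed and generate an Ed25519 key pair.
# -N "" sets an empty passphrase for this demo; use a real one in practice.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_dcc -C "dcc login key"

# Paste the PUBLIC half into "SSH Public keys" under "Advanced User Options":
cat ~/.ssh/id_ed25519_dcc.pub
```

Only the .pub file should ever be uploaded; the private key stays on your machine.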

The CPU and RAM resources each node makes available to SLURM are slightly smaller due to the OS change. If your jobs are not running (error: PartitionConfig), please check the resources you are requesting and reduce them.
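If you previously requested a whole node's worth of cores or memory, leaving a small margin is usually enough. A minimal sketch of a trimmed batch script (the partition name, resource numbers, and executable are illustrative, not actual DCC limits):

```shell
#!/bin/bash
#SBATCH --job-name=example          # hypothetical job name
#SBATCH --partition=common          # example partition name
#SBATCH --cpus-per-task=30          # e.g. was 32; leave cores for the OS
#SBATCH --mem=60G                   # e.g. was 64G; visible RAM shrank slightly

srun ./my_program                   # placeholder for your executable
```

`scontrol show node <nodename>` reports the exact CPU and memory a node now advertises, if you want to size requests precisely.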

Fixes were installed for some known MPI issues; these may require minor changes to individual users' job scripts to optimize performance. If your MPI jobs are experiencing performance issues after the maintenance, please email:

If you receive an error of the form “error while loading shared libraries: cannot open shared object file: No such file or directory”, add the line “export LD_LIBRARY_PATH=/opt/apps/rhel7/compatlib:$LD_LIBRARY_PATH” to your job script.
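In a job script, this looks like the following (the job name and executable are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=compat-libs      # hypothetical job name

# Prepend the RHEL7 compatibility libraries so the dynamic loader can find
# the shared objects the error message complains about:
export LD_LIBRARY_PATH=/opt/apps/rhel7/compatlib:$LD_LIBRARY_PATH

# ./my_program                      # placeholder for your executable
```

Prepending (rather than appending) ensures the compatibility libraries are searched before any others already on the path.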

Hosts Down Post-Maintenance

  • dcc-dhvi-gpu-[397-400].rc.duke.edu
  • dcc-carlsonlab-gpu-[05-08]
  • research-tarokhlab-[02-03]

Other Issues

  • The CUDA installation on dcc-dhvi-gpu-[01-496] is 10.2, while dcc-dhvi-gpu-[497-520] have 11.2
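Because the toolkit version differs across those node ranges, a job script can branch on the version it detects rather than hard-coding one. A sketch, assuming nvcc is on the PATH of the compute node (the selection logic and messages are illustrative):

```shell
# Extract the CUDA toolkit major version from `nvcc --version`,
# e.g. "release 11.2" -> 11. Defaults to 0 when nvcc is absent.
cuda_ver=$(nvcc --version 2>/dev/null | sed -n 's/.*release \([0-9]*\)\..*/\1/p')
cuda_ver=${cuda_ver:-0}

# Pick a build matching the installed toolkit (names are hypothetical):
if [ "$cuda_ver" -ge 11 ]; then
    echo "using cuda11 build"
else
    echo "using cuda10 build"
fi
```

The same pattern works for selecting environment modules or container images keyed to the toolkit version.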