12/20 10am, Globus and all GPU nodes are back in service. Please report any remaining issues to rescomputing@duke.edu. For critical issues, contact the OIT Service Desk (https://oit.duke.edu/help).

12/19 5pm, Maintenance has been completed and the vast majority of research computing resources have been returned to service. 

We are still working on a small number of hosts that did not recover correctly from the updates and will return them to service as we resolve each issue.

For Duke Compute Cluster users:

  • Storage has been significantly expanded. Update your paths to take advantage of the new home directories (one way to do this is sketched after this list). For more details, visit https://rc.duke.edu
  • Lab POCs can now manage DCC group membership using a self-service web interface at: https://rtoolkits.web.duke.edu
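
If your job scripts reference the old home directory by an absolute path, here is a minimal sketch of one way to update them in bulk with Python. OLD_HOME and NEW_HOME are hypothetical placeholders, not the actual DCC directory layout; check https://rc.duke.edu for the real paths before running anything like this.

  # Rewrite hard-coded home-directory paths in job scripts.
  # OLD_HOME / NEW_HOME are hypothetical placeholders -- substitute the
  # real old and new DCC paths from https://rc.duke.edu before running.
  from pathlib import Path

  OLD_HOME = "/old/home/netid"   # hypothetical old home path
  NEW_HOME = "/new/home/netid"   # hypothetical new home path

  for script in Path("jobs").glob("*.sh"):   # wherever your job scripts live
      text = script.read_text()
      if OLD_HOME in text:
          script.write_text(text.replace(OLD_HOME, NEW_HOME))
          print(f"updated {script}")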

What has been restored to service?

  • Duke Compute Cluster (DCC), excluding GPU nodes
  • Research computing resources (cluster and individual virtual machines) in the PRDN and Protected Network
  • Globus endpoints: still down as of this update (restored 12/20, see above); a sketch for checking endpoint availability follows this list
  • PACE VMs (OIT GPUs)
  • Duke Data Commons storage services
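
Once Globus endpoints return, one way to confirm an endpoint is reachable is with the globus-sdk Python package (pip install globus-sdk). This is a sketch under assumptions: CLIENT_ID must be a native-app client you register at developers.globus.org, and ENDPOINT_ID is your endpoint's UUID; neither value below is real.

  import globus_sdk

  CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"   # hypothetical; register your own
  ENDPOINT_ID = "YOUR-ENDPOINT-UUID"        # hypothetical; your endpoint's UUID

  # Interactive native-app login to obtain a transfer token.
  auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
  auth_client.oauth2_start_flow()
  print("Log in at:", auth_client.oauth2_get_authorize_url())
  code = input("Paste the authorization code: ").strip()
  tokens = auth_client.oauth2_exchange_code_for_tokens(code)
  transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

  # Probe the endpoint with a root directory listing.
  tc = globus_sdk.TransferClient(
      authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
  )
  try:
      tc.operation_ls(ENDPOINT_ID, path="/")
      print("Endpoint is reachable.")
  except globus_sdk.TransferAPIError as err:
      print("Endpoint not reachable yet:", err.message)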

12/19 1pm, Storage maintenance has been completed and we are beginning to restore services to users.

12/17 9:45pm, The majority of ESXi hosts have been upgraded to ESXi 6.7 and the latest patches.

12/17 4:30pm, Maintenance is progressing as expected. Please note: VMs may appear to be available as we transition between maintenance tasks, but do not use any resources until we announce that maintenance has concluded.

12/17 8am, Maintenance on Research Computing Resources has started.

Expect outages across all Research Computing resources, including:

  • Duke Compute Cluster (DCC), including home and group directories
  • All research virtual machines (VMs), including Research Toolkits RAPID virtual machines
  • Research computing resources (cluster and individual virtual machines) in the PRDN and Protected Network
  • Globus endpoints
  • PACE VMs (OIT GPU Resources)
  • Duke Data Commons storage services

Visit https://rc.duke.edu/research-computing-maintenance-outage-12-17-800-am-12-19-500pm/ for full details.