Daily Bulletin Archive

Aug. 14, 2018

8/13/2018 - HPSS downtime: Tuesday, August 14th 7:00 a.m. - 11:00 a.m.

No downtime: Cheyenne, GLADE, Geyser_Caldera

Aug. 12, 2018

8/9/2018 - The Cheyenne system’s share queue is operating with far fewer nodes than normal. CISL is exploring several solutions, with the priority of restoring the queue as soon as possible while minimizing disruption to users. The time frame for resolving the issue is not yet known.

Until the issue is resolved, users will experience a significant backlog of jobs submitted to the share queue. If turnaround time in the share queue becomes untenable, users are advised to submit their jobs to one of Cheyenne’s other queues, such as the regular queue, as an interim workaround. Note that jobs run in the non-shared queues are charged for full use of the nodes and therefore consume more core-hours, but they will likely start sooner than jobs in the share queue in its present state.
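
For example, a job script targeting the regular queue might begin like this (a minimal sketch; the project code, select line, and walltime are illustrative, not prescribed values):

#!/bin/bash
#PBS -N myjob
#PBS -A PROJECT_CODE
#PBS -q regular
#PBS -l select=1:ncpus=36:mpiprocs=36
#PBS -l walltime=01:00:00

Changing the -q directive from share to regular is typically the only modification needed, keeping in mind that the job is then charged for the full node.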

Aug. 6, 2018

08/06/2018 - No downtime: Cheyenne, GLADE, Geyser_Caldera and HPSS

Jul. 29, 2018

Cheyenne, Geyser, and Caldera users can now get a quick look at which software environment modules are installed on those systems before they log in. Two documentation pages, updated daily with the output of module spider commands, provide these listings.
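
For example, on a system that uses the Lmod module environment, you can list all installed versions of a package and then examine one in detail (the netcdf module name and version here are illustrative):

module spider netcdf
module spider netcdf/4.6.1

The first command lists every installed version; the second shows how to load a specific version, including any prerequisite modules.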

For more information about module commands, see our environment modules documentation.

Jul. 29, 2018

Globus team members will present a workshop at NCAR from 8:30 a.m. to 5 p.m. MDT on Wednesday, September 5, for system administrators who have deployed or are planning to deploy Globus, developers building applications for research, and others who are interested in learning more about the service for research data management.

Place: Center Green Campus, CG1-1210-South-Auditorium, 3080 Center Green Drive, Boulder
Agenda: https://www.globusworld.org/tour/program?c=14
Registration: https://www.globusworld.org/tour/register
Cost: No charge to attend; space is limited, so register early

The session will include hands-on walkthroughs of:

  • Using Globus for file transfer, sharing, and publication

  • Installing and configuring Globus endpoints

  • Incorporating Globus capabilities into your own data portals, science gateways, and other web applications

  • Automating research data workflows using Globus CLI and API — including how to automate scripted transfers to and from the new NCAR Campaign Storage

  • Using Globus in conjunction with the Jupyter platform

  • Integrating Globus services into your institutional repository and data publication workflows

  • Using Globus Auth authentication and fine-grained authorization for accessing your own services

Globus (www.globus.org) is a research data management service developed by the University of Chicago and used by hundreds of thousands of researchers at institutions in the U.S. and abroad.

Jul. 27, 2018

The Globus interface for transferring data does not handle symbolic links and will not create a symbolic link on a destination endpoint. This is true in both the web and command-line interfaces. If you explicitly request a transfer of a symbolic link, Globus follows the link and transfers the data it points to. More importantly, if a directory that you copy recursively with Globus contains symbolic links, those links are ignored entirely. Run the following command to determine whether a directory you plan to transfer contains symbolic links:

find /path/to/folder -type l

Because symbolic links are common in working directories, CISL recommends using the cp or rsync commands to move data between various spaces on GLADE. To move data from old work spaces to new work spaces, for example, use the following recursive copy:

cp -a -r /glade/p_old/work/${USER}/data_directory /glade/work/${USER}
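
If you prefer rsync, a comparable invocation is the following (a sketch using standard rsync options; -a preserves permissions, timestamps, and symbolic links):

rsync -a /glade/p_old/work/${USER}/data_directory /glade/work/${USER}/

An added benefit of rsync is that it can be rerun to resume an interrupted copy, transferring only the files that have not yet arrived.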

For transfers to and from the new Campaign Storage, and for large transfers to file systems at other sites, CISL still recommends Globus as the easy, fast, and secure option to move data. However, it is important to prepare your data for transfer by identifying and managing your symbolic links. There are two approaches you can take:

  1. If you wish to preserve the linked data, simply replace the symbolic link with the target data using cp.

  2. If you wish to preserve the symbolic links themselves, the easiest approach is to create a tarball containing all of the files you want to copy (including the symbolic links), and then use Globus to transfer that tarball to the target file system, as shown in the example below.
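
For example (a minimal sketch; the directory and archive names are illustrative, and tar stores symbolic links as links rather than following them by default):

cd /glade/work/${USER}
tar -cf data_directory.tar data_directory

After Globus delivers the tarball, unpack it on the destination file system with tar -xf data_directory.tar.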

If you need guidance on which approach is the best for your particular data transfer, please contact cislhelp@ucar.edu with questions.

 

Jul. 24, 2018

The Cheyenne, Geyser, and Caldera clusters will be unavailable Tuesday, July 24, starting at approximately 6 a.m. MDT to allow CISL staff to update key system software components. The outages are expected to last until approximately 6 p.m. Tuesday, but every effort will be made to return the systems to service as soon as possible.

To minimize the impact on running jobs, all Cheyenne batch queues will be suspended at approximately 6:00 p.m. tonight. Running jobs will not be interrupted. After the queues are suspended, users will still be able to submit batch jobs, but those jobs will be held until the system is returned to service Tuesday evening. A system reservation will be created on Geyser and Caldera to prevent batch jobs from executing past 6:00 a.m. Tuesday morning.

All batch jobs and interactive processes that are still executing when the outages begin will be killed. The clusters’ login nodes will be unavailable throughout the outages.

CISL will inform users through the Notifier service when the systems are restored.

Jul. 18, 2018

A recording of the July 9 tutorial, “Using Globus and Campaign Storage,” is now available on the CISL web site. See the course page to review the presentation and download the slides. The 45-minute tutorial familiarizes users with the Globus file transfer system and the new Campaign Storage resource.

Jul. 17, 2018

A new, larger GLADE scratch space is now available for immediate use as /glade/scratch_new/. The new space is built with the latest version of GPFS 5, providing more efficient use of the available storage and improved I/O performance. Users are encouraged to move their existing /glade/scratch files to the new space as soon as possible. As with the new /glade/p file space, users should take note of the changes for their impact on workflows and scripts.

Here are the key changes to be aware of:

  • The present /glade/scratch/ spaces will remain read/write for a period of 30 days.

  • In 30 days the present /glade/scratch/ spaces will be renamed /glade/scratch_old/ and will become read-only. The purge window for files in /glade/scratch_old/ will be reduced to 30 days.

  • Also in 30 days, /glade/scratch_new will be renamed /glade/scratch.

  • Each user’s current scratch quotas will be preserved in the new scratch space.

  • Effective immediately, all requests for scratch quota increases will apply to the new scratch space.

  • /glade/scratch_old will be removed from the system in approximately 60 days, in early September.

CISL recommends using Globus as the most efficient way to transfer files across file systems. Globus monitors progress and automatically validates correctness of each file transfer. Users are asked to remove files from /glade/scratch_old/ once their transfers are complete.
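
For example, a transfer initiated with the Globus command line interface might look like this (a sketch; the endpoint UUIDs are placeholders for the actual GLADE endpoint IDs, and the paths are illustrative):

globus transfer SRC_ENDPOINT_UUID:/glade/scratch_old/${USER}/data DST_ENDPOINT_UUID:/glade/scratch/${USER}/data --recursive --label "scratch migration"

Globus validates each file after it is copied, so the originals in /glade/scratch_old/ can be safely deleted once the task reports success.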

Jul. 17, 2018

HPSS downtime: Tuesday, July 17th 7:30 a.m. - 10:00 a.m.

No downtime: Cheyenne, GLADE, Geyser_Caldera
