Daily Bulletin Archive

August 17, 2018

8/20/18 - Registration is now open for an NCAR/CISL series of four one-day workshops on Modern Fortran beginning Tuesday, September 11. Dan Nagle, CISL Consulting Services Group software engineer and a member of the U.S. Fortran Standards Technical Committee, will provide the training at the NCAR Mesa Lab’s Fleischmann Building (Walter Orr Roberts room) in Boulder.

Participants are encouraged to bring their own laptop computers with recent releases of gfortran, MPICH, and OpenCoarrays installed. Each workshop will begin at 9 a.m. and end at 4 p.m., with an hour-long break at noon.
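
For participants setting up a laptop in advance, the commands below are a minimal sketch of how to confirm the toolchain is in place from a terminal; the exact version-report flags can vary between releases, so treat these as illustrative rather than definitive.

# Confirm the compiler and MPI installations
gfortran --version
mpichversion        # reports the installed MPICH release
# OpenCoarrays installs the caf and cafrun wrappers used to build and run coarray programs
caf --version
cafrun --version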

  • Scalar Fortran - Tuesday, Sept. 11: Scope, definition, scalar declarations and usage, and interacting with the processor.

  • Vector Fortran - Tuesday, Sept. 18: Arrays, storage order, elemental operations, and array intrinsics.

  • Object-Oriented Fortran - Wednesday, Sept. 26: Derived types, defined operations, defined assignment, and inheritance.

  • Parallel Fortran - Tuesday, Oct. 2: Coarray concepts, declarations, and usage; synchronization and handling failed images.

Use this form to register to attend one or more workshops. The workshops will not be webcast or recorded.

August 15, 2018

Updated 8/15/2018 - A number of the previously announced changes to the GLADE file system are scheduled for Tuesday, August 21. First, /glade/scratch will be moved to /glade/scratch_old and become read-only. The purge policy for files in that space will be 30 days. The /glade/scratch_old space will be removed from the system on October 2.

Also, the new and larger /glade/scratch_new (implemented on July 10) will be renamed to /glade/scratch. The 60-day purge policy for this space will remain in place.

To prepare for the renaming of the file spaces, users are encouraged to begin using the new scratch space as soon as it is practical to do so. For example, files remaining in the old scratch space should be moved to the new scratch space.
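
As a minimal sketch, copying one directory from the current scratch space to the new space might look like the following; the directory name is hypothetical, so substitute your own paths.

# Copy a run directory from the old scratch space to the new one,
# preserving permissions and timestamps
rsync -av /glade/scratch/$USER/my_run_dir/ /glade/scratch_new/$USER/my_run_dir/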

Date     Change                                    Notes
Aug 21   /glade/scratch -> /glade/scratch_old      Becomes read-only; 30-day purge policy
Aug 21   /glade/scratch_new -> /glade/scratch      Retains 60-day purge policy
Oct 2    /glade/scratch_old will be removed
August 14, 2018

8/13/2018 - HPSS downtime: Tuesday, August 14, 7:00 a.m. - 11:00 a.m.

No downtime: Cheyenne, GLADE, Geyser_Caldera

August 14, 2018

8/2/2018 - CISL is now accepting large-scale allocation requests from university-based researchers for the 5.34-petaflops Cheyenne cluster. Submissions are due September 11.

The fall opportunity will include allocations on some new supporting resources. The Casper analysis and visualization cluster will soon enter production to replace the Geyser/Caldera clusters. University projects should request long-term space on the Campaign Storage resource instead of HPSS. Unlike HPSS, Campaign Storage has no default minimum amount; users are asked to justify the amount requested. Scrutiny of the justification will increase with the size of the request.

Researchers are encouraged to review the allocation instructions before preparing their requests. The CISL HPC Allocations Panel (CHAP) is applying increased scrutiny to data management plans and justifications for storage requests. For instructions and information regarding available resources, see the CHAP page: https://www2.cisl.ucar.edu/chap

At the fall meeting, CISL will allocate 180 million core-hours on Cheyenne, up to 2 PB of Campaign Storage space, and up to 500 TB of GLADE project space. Large allocations on Cheyenne are those requesting more than 400,000 core-hours. CISL accepts requests from university researchers for these large-scale allocations every six months.

Please contact cislhelp@ucar.edu if you have any questions.

August 13, 2018

8/9/2018 - The Cheyenne system’s share queue is operating with far fewer nodes available to it than normal. CISL is exploring a number of solutions, with the priorities of restoring the queue as soon as possible and minimizing disruption to users. The time frame for resolving the issue is not yet known.

Until the issue is resolved, users will experience a significant backlog of jobs submitted to the share queue. If job turnaround time in the share queue becomes untenable, users are advised to submit their jobs to one of Cheyenne’s other queues, such as the regular queue, as an interim workaround. Note that jobs run in the non-shared queues are charged for full use of the nodes and therefore consume more core-hours, but they will likely start sooner than jobs in the share queue in its present state.
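
For illustration, the job script fragments below sketch the difference between the two requests; the resource selections are examples only and should be adjusted to match your own jobs.

# Shared-node job (currently subject to the backlog); charged only for the cores requested
#PBS -q share
#PBS -l select=1:ncpus=1

# Interim workaround: full-node job in the regular queue; charged for the entire
# 36-core node but likely to start sooner under present conditions
#PBS -q regular
#PBS -l select=1:ncpus=36:mpiprocs=36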

August 3, 2018

08/06/2018 - No downtime: Cheyenne, GLADE, Geyser_Caldera and HPSS

July 30, 2018

Globus team members will present a workshop at NCAR from 8:30 a.m. to 5 p.m. MDT on Wednesday, September 5, for system administrators who have deployed or are planning to deploy Globus, developers building applications for research, and others who are interested in learning more about the service for research data management.

Place: Center Green Campus, CG1-1210-South-Auditorium, 3080 Center Green Drive, Boulder
Agenda: https://www.globusworld.org/tour/program?c=14
Registration: https://www.globusworld.org/tour/register
Cost: No charge to attend; space is limited, so register early

The session will include hands-on walkthroughs of:

  • Using Globus for file transfer, sharing and publication

  • Installing and configuring Globus endpoints

  • Incorporating Globus capabilities into your own data portals, science gateways, and other web applications

  • Automating research data workflows using the Globus CLI and API, including how to automate scripted transfers to and from the new NCAR Campaign Storage (a minimal CLI sketch appears at the end of this announcement)

  • Using Globus in conjunction with the Jupyter platform

  • Integrating Globus services into your institutional repository and data publication workflows

  • Using Globus Auth authentication and fine-grained authorization for accessing your own services

Globus (www.globus.org) is a research data management service developed by the University of Chicago and used by hundreds of thousands of researchers at institutions in the U.S. and abroad.
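
For those interested in the scripted-transfer topic above, the commands below are a minimal sketch of a CLI-driven recursive transfer; the endpoint variables and paths are hypothetical placeholders, not actual NCAR endpoint identifiers.

# Authenticate once, then submit a recursive transfer between two endpoints
globus login
globus transfer --recursive --label "results to campaign storage" \
    "$SRC_ENDPOINT:/glade/work/$USER/results" \
    "$DEST_ENDPOINT:/my_project/results"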

July 30, 2018

Cheyenne, Geyser, and Caldera users can now get a quick look at what software environment modules are installed on those systems before they log in. Two documentation pages are updated daily with the output of module spider commands.

For more information about module commands, see our environment modules documentation.
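
The same information can also be generated directly from a login session; a minimal example of module spider usage follows, with the package name and version shown purely as illustrations.

module spider                # list all modules available on the system
module spider netcdf         # list the available versions of a particular package
module spider netcdf/4.6.1   # show what is needed to load one specific version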

July 24, 2018

The Globus interface for transferring data does not handle symbolic links and does not create a symbolic link on a destination endpoint. This is true in both the web and command line interfaces. If you explicitly request a transfer of a symbolic link, Globus will follow that link and transfer the data that the link points to. More importantly, if you have symbolic links inside a directory that you copy recursively with Globus, the links will be ignored entirely. You can run the following command to determine if you have symbolic links in your transfer:

find /path/to/folder -type l

Because symbolic links are common in working directories, CISL recommends using the cp or rsync commands to move data between various spaces on GLADE. To move data from old work spaces to new work spaces, for example, use the following recursive copy:

cp -a -r /glade/p_old/work/${USER}/data_directory /glade/work/${USER}

For transfers to and from the new Campaign Storage, and for large transfers to file systems at other sites, CISL still recommends Globus as the easy, fast, and secure option to move data. However, it is important to prepare your data for transfer by identifying and managing your symbolic links. There are two approaches you can take:

  1. If you wish to preserve the linked data, simply replace the symbolic link with the target data using cp.

  2. If you wish to preserve the symbolic links themselves, the easiest approach is to create a tarball containing all of the files you want to copy (including the symbolic links), and then use Globus to transfer that tarball to the target file system. Both approaches are sketched below.
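
The following rough sketch illustrates both approaches; the link and directory names are hypothetical, so substitute your own.

# Approach 1: replace an individual symbolic link with a copy of the file it points to
cp --remove-destination "$(readlink -f my_link)" my_link

# Approach 2: bundle a directory, symbolic links included, into a tarball
tar -cf data_directory.tar -C /glade/work/$USER data_directory
# ...then transfer data_directory.tar with Globus and unpack it at the destination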

If you need guidance on which approach is the best for your particular data transfer, please contact cislhelp@ucar.edu with questions.


July 20, 2018

The Cheyenne, Geyser, and Caldera clusters will be unavailable Tuesday, July 24, starting at approximately 6 a.m. MDT to allow CISL staff to update key system software components. The outages are expected to last until approximately 6 p.m. Tuesday evening but every effort will be made to return the systems to service as soon as possible.

To minimize impact to running jobs, all Cheyenne batch queues will be suspended at approximately 6 p.m. tonight. Running jobs will not be interrupted. After the queues are suspended, users will still be able to submit batch jobs, but those jobs will be held until the system is returned to service Tuesday evening. A system reservation will be created on Geyser and Caldera to prevent batch jobs from executing past 6 a.m. Tuesday morning.

All batch jobs and interactive processes that are still executing when the outages begin will be killed. The clusters’ login nodes will be unavailable throughout the outages.

CISL will inform users through the Notifier service when the systems are restored.
