Daily Bulletin Archive

Mar. 5, 2018

No downtime: Cheyenne, GLADE, Geyser_Caldera, HPSS

Mar. 4, 2018

Users of the NCAR/CISL High Performance Storage System (HPSS) whose storage allocations are overspent as of Monday, April 2, will receive error messages when they try to write files to that system and those transfers will fail. Once an allocation is overspent, users will need to reduce their holdings before they can write additional files. Some users may need to modify their workflows to ensure that archive space is available, detect error messages, and confirm execution of transfers to HPSS.
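
For workflows that script their archive writes, one way to detect a failed transfer is to check the exit status of the HPSS client command. The sketch below assumes the hsi utility is used; the file names and paths shown are placeholders:

    # Minimal sketch: write a file to HPSS and confirm the transfer succeeded.
    # File names and paths are placeholders.
    hsi "put /glade/scratch/$USER/run_output.tar : run_output.tar"
    if [ $? -ne 0 ]; then
        echo "HPSS write failed; check your allocation in SAM" >&2
        exit 1
    fi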

To check the status of your HPSS allocation, log in to the Systems Accounting Manager (sam.ucar.edu) and select Reports, then My Account Statements. The accounting statements are updated weekly, so the most recent writes or deletions may not be reflected until several days after they are made.

Additional details and guidance will be available soon.

Mar. 1, 2018

For university researchers who are interested in or planning to apply for large-scale Cheyenne allocation opportunities, Dave Hart, NCAR's User Services manager, will host an online Q&A session at 2 p.m. MST on Thursday, March 1.

The session will include a brief overview of the NCAR/CISL supercomputing and storage systems, tips for writing successful allocation requests, and an opportunity to ask questions.

To register for the webcast, please use this form. The session will be recorded.

Feb. 28, 2018

HPSS downtime: Wednesday, Feb. 28, 7:00 a.m. - 10:00 a.m.

No downtime: Cheyenne, GLADE, Geyser_Caldera

Feb. 26, 2018

User sessions that consume excessive resources on the Cheyenne system’s login nodes will be killed automatically beginning Monday, February 26, to ensure an appropriate balance between user convenience and login node performance. Users whose sessions are killed will be notified by email.

Misuse of the login nodes can significantly slow response times and increase the difficulty of using the nodes for their main purposes, which include submitting batch jobs, editing scripts, and other processes that consume only modest resources. Some Cheyenne users have been running resource-intensive computing, data processing, file transfer, and compilation jobs from the command line on those nodes.

Users are encouraged to compile large codes on the Cheyenne batch nodes or the Geyser or Caldera clusters, depending on where they want to run their programs. CISL provides the qcmd script for running CESM and WRF builds and other compiles as well as running compute jobs on batch nodes. Other resource-intensive work such as R and Python jobs that use large amounts of memory and/or processing power can be run efficiently in the Cheyenne “share” queue. Users can contact the Consulting Services Group for assistance.
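
As a minimal sketch of that approach (the project code, walltime, and build command below are placeholders, and qcmd is assumed to accept standard PBS options ahead of the command):

    # Run a build on a Cheyenne batch node rather than a login node.
    qcmd -A PROJ0001 -l walltime=00:30:00 -- make -j 8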

Feb. 23, 2018

A job-dependency issue in the PBS Pro workload management system used to schedule jobs on Cheyenne sometimes mistakenly allows dependent jobs to run out of sequence. This occurs when dependent jobs in hold status (H) are released.

CISL and the vendor are working on a solution. In the meantime, CISL recommends submitting dependent jobs manually as their parent jobs finish, particularly if running them out of sequence will cause extra cleanup work or damage control. Contact the CISL Consulting Services Group with any questions or requests for assistance.
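
For illustration, the sketch below shows the dependency pattern that is affected and one way to submit the child job only after the parent has finished; the job script names and polling interval are placeholders:

    # Affected pattern: a child job held with a dependency on its parent.
    #   PARENT=$(qsub parent_job.pbs)
    #   qsub -W depend=afterok:$PARENT child_job.pbs

    # Workaround: submit the parent alone, wait for it to finish, then submit
    # the child manually (polling qstat; assumes finished jobs drop from qstat).
    PARENT=$(qsub parent_job.pbs)
    while qstat "$PARENT" > /dev/null 2>&1; do sleep 300; done
    qsub child_job.pbs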

Feb. 22, 2018

Data sets that are provided to researchers through the CMIP Analysis Platform can now be found on the GLADE disk storage system in /glade2/collections/cmip. The original location (/glade/p/CMIP) will be removed on February 28.
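
Scripts and notebooks that still reference the old path only need their data root updated; the variable name below is purely illustrative:

    # Point analyses at the new CMIP collection on GLADE.
    CMIP_ROOT=/glade2/collections/cmip   # formerly /glade/p/CMIP
    ls "$CMIP_ROOT"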

By hosting climate data on GLADE, the CMIP Analysis Platform enables researchers to work with it on the Geyser and Caldera analysis and visualization clusters without needing to transfer large data sets from Earth System Grid Federation (ESGF) sites to their local machines.

See Adding data sets to request the addition of data sets that are not already available on GLADE.

Feb. 22, 2018

Documentation for how to use CISL’s peak_memusage tool now includes information about running it with Slurm jobs on Geyser and Caldera. Sample PBS scripts for Cheyenne jobs also have been updated. The utility helps users determine how much memory a program needs in order to run successfully. See Checking memory use for details.
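
As a brief sketch of typical use in a Cheyenne batch job (the project code, resource requests, and program name below are placeholders; consult the Checking memory use page for the supported invocation on each system):

    #!/bin/bash
    #PBS -N memcheck
    #PBS -A PROJ0001
    #PBS -q regular
    #PBS -l select=1:ncpus=1
    #PBS -l walltime=00:30:00

    # Load the utility and report the peak memory used by a serial program.
    module load peak_memusage
    peak_memusage.exe ./my_program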

Feb. 22, 2018

Cheyenne users have increasingly been misusing the system’s login nodes by running resource-intensive computing, data processing, file transfer, and compilation jobs from the command line on those nodes. This significantly slows response times for others and increases the difficulty of using the login nodes for their main purposes, which include logging in, editing scripts, and other processes that consume only modest resources.

As noted here, use of the login nodes is restricted to processes that do not consume excessive resources, in order to ensure an appropriate balance between user convenience and login node performance. Because the situation has recently become acute, users who run jobs that consume excessive resources on the Cheyenne login nodes will have those jobs killed.

Users are encouraged to compile on the Cheyenne batch nodes or the Geyser or Caldera clusters, depending on where they want to run their programs. CISL provides the qcmd script for running CESM and WRF builds and other compiles in addition to compute jobs on batch nodes. Other resource-intensive work such as R and Python jobs that spawn hundreds of files can be run efficiently in the Cheyenne “share” queue. Large file transfers are best done using Globus.
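
A minimal sketch of a share-queue job script for that kind of resource-intensive work (the project code, memory request, and script name below are placeholders):

    #!/bin/bash
    #PBS -N postprocess
    #PBS -A PROJ0001
    #PBS -q share
    #PBS -l select=1:ncpus=1:mem=20GB
    #PBS -l walltime=02:00:00

    # Run a memory- or I/O-heavy Python post-processing step off the login nodes.
    module load python
    python process_output.py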

Contact the Consulting Services Group for information if you need help using the Cheyenne batch queues or Globus, or if you would like to discuss what is meant by modest usage of the login nodes.

Feb. 20, 2018

CISL will reactivate the purge policy for the GLADE scratch file space on Wednesday, February 7. The purge policy was turned off following the December 30 power outage at the NWSC facility so that users would not suddenly lose files when Cheyenne, Geyser, Caldera, and GLADE were restored to service.

The purge policy data-retention limit will be increased from 45 days to 60 days, and the policy will consider two factors: a file’s creation date and its last access date. Previously, only the last access date was considered.

Files that were created more than 60 days ago and have not been accessed for more than 60 days will be deleted. CISL monitors scratch space usage carefully and reserves the right to decrease the 60-day limit as usage increases. Users will be informed of any change to the purge policy.
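
To get a rough idea of which files may be at risk, users can query their scratch space by file age; the command below is only an approximation, since standard tools report change time (ctime) rather than a true creation date:

    # List files not accessed or changed in more than 60 days (approximate).
    find /glade/scratch/$USER -type f -atime +60 -ctime +60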

GLADE scratch space is for temporary, short-term use and is not intended to meet long-term storage needs.
