Daily Bulletin Archive

June 10, 2019

A major Cheyenne operating system (OS) update is scheduled to begin Monday, June 24, and is expected to be completed by Monday, July 1. The Cheyenne cluster will be unavailable during the update, including the system’s login nodes and all cron services. However, users will be able to access the Casper cluster, GLADE file system, and HPSS through the recently deployed Casper login nodes.

The Cheyenne OS will be updated from SUSE Linux Enterprise Server (SLES) Service Pack 1 to SLES Service Pack 4. The update is required to bring the system up to current security and support levels and is expected to be the last operating system upgrade in Cheyenne’s lifetime.

Most users’ programs and executables will need to be rebuilt following the update because many system libraries will change. Most scripts should not require modification, but users should test their commonly used scripts thoroughly after Cheyenne returns to service.

The routine monthly maintenance times that were scheduled for July 2, August 6, and September 3 have been canceled.

June 10, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, GLADE and HPSS.

June 7, 2019

Registration is open for the CMIP6 Hackathon, a hands-on event including tutorials, software development, data analysis, and opportunities for collaboration centered around effective computational workflows and CMIP-related science.

The October 16-18 event will be held concurrently at two locations: the NCAR Mesa Lab in Boulder, Colorado, and the Lamont Doherty Earth Observatory in Palisades, New York. Limited funding is available to support travel and lodging, with preference given to early-career scientists. Participants will be selected on the basis of interests, experience, and potential to contribute to collaborative initiatives. People from observational or application-related backgrounds are encouraged to apply.

The deadline to apply is July 31. See the NCAR CMIP6 Hackathon website for more information.

June 5, 2019

The six Cheyenne login nodes are shared by everyone in the user community, so it’s important to keep their intended purposes in mind. Beyond logging in, users can run short, non-memory-intensive processes on the login nodes. These include tasks such as editing text, running small serial scripts or programs, and submitting jobs to run on the compute nodes.

Memory-intensive processes that slow login node performance for all users are killed automatically, and the responsible parties are notified by email. One good way to avoid this is to run an interactive job on the batch nodes when you need to do memory-intensive work.
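
Below is a minimal, hedged sketch of that idea in Python: a script that checks the hostname it is running on and refuses to start its memory-intensive work if it appears to be on a shared login node. The login node hostnames (cheyenne1 through cheyenne6) and the guard itself are illustrative assumptions, not a CISL-provided tool.

```python
import socket
import sys

# Illustrative guard: skip memory-intensive work if this script appears to be
# running on a shared Cheyenne login node (hostnames cheyenne1-cheyenne6 are
# assumed here) instead of a compute node obtained through the scheduler.
LOGIN_NODES = {f"cheyenne{i}" for i in range(1, 7)}

hostname = socket.gethostname().split(".")[0]
if hostname in LOGIN_NODES:
    sys.exit(
        f"{hostname} is a shared login node; run this memory-intensive step "
        "in an interactive or batch job on the compute nodes instead."
    )

# ... memory-intensive work would continue here on a compute node ...
```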

Learn more about using shared resources.

June 3, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, GLADE and HPSS.

May 30, 2019

Video and slides from the May 24 tutorial for Cheyenne and Casper users have been published in the CISL Course Library. The 75-minute Introduction to Cheyenne for New Users covers basic usage and typical user workflows. Topics discussed include:

  • Overview of compute and storage resources

  • Using software and building applications

  • Scheduling jobs on the batch resources

  • Workflow recommendations and best practices

May 29, 2019

Keeping your files organized in a system like GLADE can greatly simplify your life and save you time and trouble. Say you have 20 TB of Mount Pinatubo volcanic aerosol data. Keep those files in a subdirectory such as /glade/u/home/$USER/pinatubo rather than scattering them among unrelated files or across multiple directories. Such dedicated directory trees are easier to share and to transfer to other users or projects as necessary.
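
As a simple illustration, the Python sketch below gathers scattered files into one project subdirectory. The pinatubo directory name comes from the example above, while the NetCDF glob pattern is a placeholder you would adapt to your own data.

```python
from pathlib import Path

# Hypothetical example: collect scattered Pinatubo aerosol files from a home
# directory into a single project subdirectory on GLADE.
home = Path.home()                      # e.g., /glade/u/home/$USER
project_dir = home / "pinatubo"
project_dir.mkdir(exist_ok=True)

# The "*pinatubo*.nc" pattern is a placeholder; adjust it to match your files.
for f in home.glob("*pinatubo*.nc"):
    f.rename(project_dir / f.name)
    print(f"moved {f.name} -> {project_dir}")
```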

More about managing files and other best practices.

May 28, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, GLADE and HPSS.

May 23, 2019

CISL has released a pre-production implementation of the popular JupyterHub platform on the Cheyenne system. It is accessible at jupyterhub.ucar.edu with a valid Cheyenne user ID and YubiKey or Duo authentication.

As this is a pre-production offering, no documentation for the installation is available at this time, and CISL cannot guarantee its availability, robustness, or reliability. This JupyterHub instance will remain in a pre-production state for the lifetime of the Cheyenne cluster. A fully supported instance is expected to be available on the Cheyenne system’s successor, which is scheduled to be deployed in 2021.

Since coming online earlier this year, NCAR's JupyterHub portal has been used by several workshops, tutorials, and individual users with varying degrees of success. If you plan to use JupyterHub in a workshop or tutorial, please notify CISL at cislhelp@ucar.edu at least one week in advance so we can provide the best possible experience and minimize potential issues.

May 20, 2019

When a Globus user authenticates to transfer files to or from an NCAR Campaign Storage endpoint or another endpoint in the Cheyenne/GLADE environment, the default credential lifetime is 24 hours. Users can authenticate less frequently, and simplify both attended and unattended Globus transfers, by extending that lifetime to as long as 720 hours (30 days).
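
For users who script their transfers, the hedged Python sketch below shows one way this could look with the globus-sdk package: it fills in an endpoint's MyProxy activation requirements and requests a 720-hour credential lifetime. The endpoint UUID, token, and credential values are placeholders, and the activation-requirement field names are assumptions; consult the Globus documentation for the authoritative interface.

```python
import globus_sdk

# Placeholders: a valid Globus transfer access token and the UUID of the
# endpoint (for example, NCAR Campaign Storage) you want to activate.
TRANSFER_TOKEN = "..."
ENDPOINT_ID = "..."

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Fetch the endpoint's activation requirements, fill in the MyProxy username,
# passphrase, and requested lifetime (720 hours = 30 days), then activate.
# The field names below are assumptions about the MyProxy activation document.
reqs = tc.endpoint_get_activation_requirements(ENDPOINT_ID).data
for field in reqs.get("DATA", []):
    if field.get("type") != "myproxy":
        continue
    if field["name"] == "username":
        field["value"] = "your_username"        # placeholder
    elif field["name"] == "passphrase":
        field["value"] = "your_passphrase"      # placeholder
    elif field["name"] == "lifetime_in_hours":
        field["value"] = "720"

result = tc.endpoint_activate(ENDPOINT_ID, reqs)
print(result["message"])
```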

The Globus file transfers documentation page describes how to use the web and command-line interfaces and links to other support resources.

Users can also contact the CISL Consulting Services Group for assistance.
