Daily Bulletin Archive

Jul. 16, 2018

Containers are a hot topic in high-performance and scientific computing, but while they can provide significant advantages, they don't always live up to the hype. That’s why CISL is offering a hands-on class, “Containers and How They Work,” from 9 a.m. to noon MDT on Friday, July 20, at the NCAR Mesa Lab in Boulder.

The course explains what containers are and how they work, and it surveys some popular implementations with an eye toward supporting scientific workloads. Security and other operational concerns will also be covered for cluster administrators who are thinking about supporting containerized workloads on their systems. Topics to be covered include:

  • What are containers and how do they work?

  • Image formats (tar/filesystem/overlayfs/dense filesystem image/singularity)

  • Build vs. run [singularityhub]

  • Platform independence/reproducibility

  • User namespaces and rootless containers

  • Scheduler integration

  • Container runtimes (Docker, Charliecloud, Inception, Singularity, others)

  • OCI/Standards/runc

  • Applications

  • Education/outreach

  • Cloud

  • Reproducible science

The class is intended for anyone who is planning to deploy applications and create application environments using containers; developers and systems support staff getting started with containers; and others interested in learning about containers. Participants should be familiar with Linux and system operations and should bring a laptop and authentication token for connecting to Geyser and Caldera. Laptops should be fully charged as there may not be enough power receptacles in the seminar room.

Please use this form to register so CISL knows how many participants to expect. Space is limited to 50 participants and registration is open through July 16.

Jul. 13, 2018

Changes to the GLADE project space implemented on Tuesday, July 10, continue the evolution of CISL’s storage architecture and user environment announced in April. Users should take note of how the changes affect their workflows and scripts.

Here are the key changes to be aware of:

  • On July 10, the present /glade/p/ spaces became /glade/p_old/ and will remain readable and writable for 30 days. After 30 days, the old spaces will be read-only until they are decommissioned at the end of 2018.

  • Also on July 10, new /glade/p/<entity>/ spaces were put in place for existing projects so users can move their files from /glade/p_old/ to the new file system. An entity can be univ, uwyo, cesm, cisl, nsc, or another designated NCAR lab or special program. For example:

  • NCAR lab: /glade/p/P12345678 becomes /glade/p_old/P12345678, and the new space is /glade/p/cisl/P12345678

  • University: /glade/p/UABC1234 becomes /glade/p_old/UABC1234, and the new space is /glade/p/univ/UABC1234

CISL recommends using Globus as the most efficient way to transfer files. The system monitors progress and automatically validates correctness of each file transfer. Users are asked to remove files from /glade/p_old/ once their transfers are complete.
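Job scripts that hard-code the old project paths will also need updating. A minimal sketch using GNU sed, with a hypothetical script name and the NCAR-lab example project code from above (adjust the entity directory, such as cisl or univ, to match your project):

```shell
# Create an illustrative job script containing an old-style project path.
printf 'cd /glade/p/P12345678/output\n' > myscript.sh

# Rewrite old /glade/p/<project> paths to the new /glade/p/<entity>/<project>
# layout in place. P12345678 and the cisl entity are placeholders.
sed -i 's|/glade/p/\(P[0-9]\{8\}\)|/glade/p/cisl/\1|g' myscript.sh

cat myscript.sh
```

The same substitution can be applied across many scripts at once, for example with `sed -i` over a shell glob.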

Jul. 13, 2018

The CISL Help Desk and Consulting Services office will close at 3 p.m. Friday, July 13, so staff members can attend a UCAR function.

Jul. 10, 2018

The Cheyenne, Geyser, and Caldera clusters and the GLADE file system will be unavailable on Tuesday, July 10, starting at approximately 7 a.m. MDT to allow CISL staff to update key system software components. The downtime is expected to last until approximately 6 p.m. but every effort will be made to return the system to service as soon as possible. The updates include installing the latest version of the PBS scheduler, changes to /glade/p/ described in today’s Daily Bulletin, and completing GLADE’s transition to GPFS 5.

A system reservation will prevent batch jobs from executing after 7 a.m. All batch queues will be suspended and the cluster’s login nodes will be unavailable throughout the update period. All batch jobs and interactive processes that are still executing when the outage begins will be killed.

CISL will inform users through the Notifier service when all of the systems are restored.

Jul. 10, 2018

CISL’s documentation for Geyser and Caldera users has been revised to reflect the recent update of those systems to CentOS 7. Key differences include the following:

  • Procedures for loading Python modules were changed to match the procedures used on Cheyenne.

  • Earlier example scripts included a source command for initializing the Slurm environment. That command is no longer needed and should be removed from scripts used in the CentOS 7 environment. (Updated examples are on this page.)

  • The first line of example bash scripts for Slurm jobs has been revised to include the -l option, which is now required to initialize the environment.

  • Some modules for outdated software are no longer available, so affected scripts should be revised to specify the newer versions.

  • The openmpi-slurm module has been renamed to openmpi.
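Taken together, a Geyser/Caldera batch script for the CentOS 7 environment might begin as follows. This is an illustrative sketch only: the job name, resource requests, and time limit are placeholders, not CISL-documented values.

```shell
#!/bin/bash -l
# The -l option is now required so the environment is initialized;
# the old "source" command for Slurm setup is no longer needed.
#SBATCH -J myjob          # job name (placeholder)
#SBATCH -n 1              # number of tasks (placeholder)
#SBATCH -t 00:30:00       # wall-clock limit (placeholder)

module load openmpi       # formerly named openmpi-slurm
```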

Jul. 8, 2018

Registration is now open for the NCAR/CISL Consulting Services Group’s 45-minute tutorial at 10 a.m. MDT on Monday, July 9. The tutorial will introduce users to the Globus file transfer system and cover the following topics in detail:

  • Using the Globus web and command line interfaces

  • Making transfers between remote Globus endpoints

  • Accessing the new Campaign Storage spaces using Globus

  • Moving data between NCAR HPC systems and local workstations

Register to attend in person (in the Damon Conference Room at NCAR’s Mesa Lab in Boulder) or via webcast by selecting one of these links:

Jul. 8, 2018

The new Campaign Storage file system that was announced recently is now available for production use. NCAR users are advised to contact their lab’s data storage coordinators for details on how to use the lab’s allocated space.

Campaign Storage is accessible using the Globus web and command line interfaces. CISL is offering a tutorial on Monday, July 9, to introduce users to the Globus file transfer system. See this announcement in today’s Daily Bulletin for more details and to register for the tutorial.

Jul. 6, 2018

Nearly all of the researchers who responded to the recent CISL survey about their CMIP6 plans expressed strong interest in using NCAR facilities such as the CMIP Analysis Platform to support their work. Of the 38 respondents, 95% said they were likely or very likely to use those resources.

Researchers were also asked to identify which DECK and CMIP6 historical simulations they anticipate needing for their work. The following shows what percentage of respondents selected each simulation:

  • 100% - Historical simulation using CMIP6 forcings (1850-2014)

  • 68.4% - Pre-industrial control simulation

  • 55.3% - AMIP simulation (~1979-2014)

  • 47.4% - 1%/yr CO2 increase

  • 36.8% - Abrupt 4xCO2 run

The survey also asked researchers to indicate which CMIP-endorsed MIPs they anticipate using. The most frequently selected were:

  • 36.8% - CORDEX

  • 36.8% - HighResMIP

  • 28.9% - CMIP Coupled Climate Carbon Cycle MIP

  • 28.9% - RFMIP Radiative Forcing MIP

  • 28.9% - Scenario MIP

The results will help CISL prioritize the addition of CMIP6 data sets to the CMIP Analysis Platform. NCAR CMIP6 data products are expected to be available this summer. Estimated time frames for data from other modeling centers will be published when they become available.

Jul. 2, 2018

CISL is planning to change the default versions of the following software modules on Cheyenne on July 2:

  •  NetCDF to version 4.6.1
  •  Python from version 2.7.13 to 2.7.14

We recommend that you test your work with the new versions before the changeover date, and report any issues or concerns to cislhelp@ucar.edu. Alternatively, you can specify your own set of default modules using these documented procedures.
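If you need to keep an older version as your personal default after the changeover, the module system's saved-collection mechanism can pin it. A sketch, assuming Cheyenne's Lmod-based module environment:

```shell
# Load the previous Python default and save the collection as your default.
# The saved collection is restored automatically at login, or on demand
# with "module restore".
module load python/2.7.13
module save default
```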


Jul. 2, 2018

No downtime: Cheyenne, GLADE, Geyser/Caldera, and HPSS