The Daily Bulletin

Registration is now open for a free, four-hour tutorial on wrf-python from 8 a.m. to noon on Wednesday, March 7, at the NCAR/UCAR Corporate Technical Training Center (CG-2) in Boulder. The tutorial is a beginner-friendly introduction to wrf-python for users of the Python programming language. Seating is limited to 16 students. The deadline for registration is February 21. See this link for more information and registration.

The Geyser and Caldera clusters will be unavailable Monday, January 22, from approximately 10 a.m. until 2 p.m. to allow CISL staff to perform updates on the systems’ networking configurations.

A system reservation will be in place to prevent batch jobs from executing after 10 a.m. Jobs that are running when the outage begins will be killed. No impact to other NCAR HPC systems or the GLADE file system is expected during the outage.

CISL will inform users through the Notifier service when the system is restored. We apologize for any inconvenience this will cause and thank you for your patience.

Registration is now open for an NCAR/CISL series of four one-day workshops on Modern Fortran beginning Tuesday, February 6.

Dan Nagle, CISL Consulting Services Group software engineer and a member of the U.S. Fortran Standards Technical Committee, will provide the training at the NCAR Mesa Lab’s Fleischmann Building in Boulder.

Participants are encouraged to bring their own laptop computers with recent releases of gfortran, mpich, and opencoarrays. Each workshop will begin at 9 a.m. and end at 4 p.m. with an hour break at noon.

  • Scalar Fortran - February 6: Scope, definition, scalar declarations and usage, and interacting with the processor.

  • Vector Fortran - February 20: Arrays, storage order, elemental operations, and array intrinsics.

  • Object-Oriented Fortran - February 27: Derived types, defined operations, defined assignment, and inheritance.

  • Parallel Fortran - March 6: Coarray concepts, declarations, and usage; synchronization and handling failed images.

Use this form to register to attend one or more workshops. The workshops will not be webcast or recorded.

The OU Supercomputing Center for Education & Research is offering a free “Supercomputing in Plain English (SiPE)” workshop in weekly sessions beginning January 23 and running through May 1. Participants can attend live in person on the University of Oklahoma campus in Norman or via videoconference. Each hour-long session begins at 11:30 a.m. Mountain time.

Workshop sessions focus on fundamental High Performance Computing (HPC) issues and topics, including: overview of HPC; the storage hierarchy; instruction-level parallelism; high performance compilers; shared memory parallelism (e.g., OpenMP); distributed parallelism (e.g., MPI); HPC application types and parallel paradigms; multicore optimization; high-throughput computing; accelerator computing (e.g., GPUs); scientific and I/O libraries; and scientific visualization. Slides from previous workshops are available here.

Use this form to register for the semester.

SiPE is targeted at an audience of computer scientists and other scientists and engineers, including undergraduates, graduate students, postdocs, faculty, staff, and professionals. Participants should have some recent programming experience or one semester of coursework in Fortran, C, C++, or Java.

CISL has added a new script for Cheyenne users that simplifies the launching of resource-intensive compilation jobs on the system’s batch nodes. Running qcmd as shown in this new documentation starts a non-interactive job on a single batch node in the Cheyenne "regular" queue with a default wall-clock time of 1 hour. It is recommended for running resource-intensive tasks such as CESM and WRF builds or any compiles with three or more threads.
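As a rough sketch of the usage described above, a resource-intensive build might be launched from a Cheyenne login node like this (the project code and make target are placeholders, and the `-A` account option is assumed to be passed through to the batch system):

```shell
# Submit the compile as a non-interactive job on one batch node in the
# "regular" queue (default wall-clock limit: 1 hour).
# PROJECT_CODE is a placeholder for your allocation's project code.
qcmd -A PROJECT_CODE -- make -j 4
```

The command after `--` runs on the batch node rather than the login node, so multi-threaded compiles do not compete with interactive users for login-node resources.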

Reminder: Use of the Cheyenne login nodes, as noted here, is restricted to running processes that do not consume excessive resources. This is to ensure an appropriate balance between user convenience and login node performance. Users are encouraged to compile on the Cheyenne batch nodes or on the Geyser or Caldera clusters, depending on where they want to run their programs.