Daily Bulletin Archive

March 29, 2019

The CISL website, the Systems Accounting Manager, Notifier service, ExtraView helpdesk ticketing system, and some other support services will be unavailable from 4:30 p.m. MDT to about midnight on Tuesday, April 2, while network infrastructure work is done. Thank you for your patience as the maintenance tasks are completed.

March 25, 2019

Registration is open for a MATLAB class that CISL is hosting at 9 a.m. on Thursday, March 28, in Boulder. A MathWorks application engineer will present Build and Execute Parallel Applications in MATLAB in the Small Seminar Room, Foothills Lab 2 (FL2-1001).

Class description

In this session, we show how to program parallel applications in MATLAB. We introduce high-level programming constructs for creating parallel applications without low-level programming and show how to offload processor-intensive tasks to a computing resource of your choice – multicore computers, GPUs, or larger resources such as HPC clusters and cloud computing services.

Learning objectives:

  • Program parallel applications in MATLAB

  • Analyze big data sets and solve large-scale problems

  • Run parallel applications interactively and as batch jobs

  • Employ multicore processors and GPUs to speed up your computations

  • Offload processor-intensive tasks to clusters and cloud computing services

Use this link to register and attend in person. The class will not be recorded or available online.

March 25, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, GLADE and HPSS

March 18, 2019

GLADE users occasionally need to share files with others who have GLADE access but who aren’t in the same UNIX group. Rather than asking CISL to create a special group in such a case, consider using access control lists (ACLs) to provide the necessary permissions.

ACLs are tools for controlling access to files and directories outside of traditional UNIX permissions. The UNIX permissions remain in effect, but users can create ACLs to facilitate short-term file sharing as needed. In the Cheyenne/GLADE environment, the most common use cases are:

  • Sharing files among users in different NCAR labs or universities.

  • Sharing files with short-term visitors, interns, students, or others during a short project period.

See Using access control lists for examples of how to create ACLs to allow other individuals and groups to work with your files, how to propagate permissions to new files and directories, and how to remove ACLs when they are no longer needed.
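
For example, here is a minimal sketch of the kind of ACL commands involved, using the standard Linux setfacl and getfacl utilities. The username "apruser" and the /glade/work paths below are hypothetical placeholders; see the documentation linked above for the commands and options supported on GLADE.

  # Give a collaborator (hypothetical username "apruser") read access to an
  # existing directory tree. The capital X grants execute/search permission
  # on directories (and on files that already have execute permission).
  setfacl -R -m u:apruser:rX /glade/work/$USER/results

  # Add a default ACL so files created in the directory later inherit the
  # same permissions.
  setfacl -d -m u:apruser:rX /glade/work/$USER/results

  # Review the ACLs currently in place, then remove them all when the
  # collaboration ends.
  getfacl /glade/work/$USER/results
  setfacl -R -b /glade/work/$USER/results

Note that the other user also needs search (execute) permission on the parent directories in order to reach the shared path.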

March 18, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, GLADE and HPSS

March 15, 2019

Cheyenne’s default MPI library is now MPT 2.19, the version that HPE recommends and supports. Versions 2.15 and 2.16 are no longer compatible with the system firmware and have been removed from the system. To keep existing scripts and job workflows from failing, the mpt/2.15 and mpt/2.16 modules still exist, but they now point to the MPT 2.19 library and print a message prompting users to upgrade. The mpt/2.15 and mpt/2.16 modules will be deleted later this year. MPT 2.18 remains available on Cheyenne but is no longer supported by HPE.

Builds of the parallel netcdf-mpi and pnetcdf libraries against MPT 2.19 are available for the following supported versions of the Intel compiler: 16.0.3, 17.0.1 (the default), 18.0.5, and 19.0.2. The libraries have also been built for GCC versions 6.3.0, 7.3.0, and 8.1.0, and for PGI 17.9.

Users should update their scripts and recompile executables to use MPT 2.19 as soon as possible.
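
As a concrete illustration, an updated build workflow on Cheyenne might look like the sketch below. The module names come from this bulletin, but exact version strings should be confirmed with module avail, and model.f90 is a hypothetical source file.

  # Load a supported compiler, the recommended MPI library, and the
  # matching parallel I/O libraries.
  module load intel/17.0.1
  module load mpt/2.19
  module load netcdf-mpi pnetcdf

  # Verify what is loaded, then rebuild so the executable links against
  # MPT 2.19 rather than a removed version.
  module list
  mpif90 -o model model.f90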

March 14, 2019

Reminder: The file retention period for the GLADE scratch space was increased recently from 60 days to 90 days. Individual files will be removed from scratch automatically when they have not been accessed – read, copied or modified – in more than 90 days. To check a file's last access time, run the command ls -ul <filename>.

The updated retention policy is expected to make it easier for users to manage their data holdings and to improve overall file system utilization.
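
For example, assuming personal scratch space is under /glade/scratch/$USER and using a hypothetical file name, the commands below check a file's last access time and list files approaching the purge threshold. The 80-day cutoff is only an illustration.

  # Show the last access time (-u) of a single file.
  ls -ul /glade/scratch/$USER/output.nc

  # List files not accessed in more than 80 days, i.e., files that will
  # become eligible for removal within roughly the next 10 days.
  find /glade/scratch/$USER -type f -atime +80 -ls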

March 11, 2019

Do you have some experience as an HPC system administrator and want to expand your skills? Consider attending Intermediate HPC System Administration, a Linux Clusters Institute workshop scheduled for May 13 to 17 at the University of Oklahoma. The workshop will:

  • Strengthen participants’ overall knowledge of HPC system administration.

  • Focus in-depth on file systems and storage, HPC networks, job schedulers, and Ceph.

  • Provide hands-on training and real-life stories from experienced HPC administrators.

See the workshop page for more information and registration. Early bird registration ends April 15.

March 11, 2019

No scheduled downtime: Cheyenne, Casper, Campaign Storage, HPSS and GLADE