Daily Bulletin Archive

June 28, 2012

Bluefire: Downtime Tuesday, June 26, 6:00am - 1:00pm

HPSS:    Downtime Tuesday, June 26, 7:00am - 10:00am

No Scheduled Downtime: DAV, GLADE, Lynx

June 28, 2012

Training Opportunities

Ways to build or refresh your HPC skills will be easy to identify with the addition of a new “Training Opportunities” button on CISL end-user documentation web pages.

The orange button appears on the right side of your screen when there are training materials from NCAR, XSEDE, or HPC University that relate to a page’s content. Just click the button to see a list of the courses.

Give it a try by looking at our Yellowstone Software page, for example.

Let us know what you think. We appreciate your feedback.

June 28, 2012

The Mesa Lab is closed all day today, June 27, to all staff other than critical operations staff due to the Flagstaff Fire west of Boulder. Because even these last few staff will have to leave immediately if a mandatory evacuation is ordered, Bluefire and the other CISL systems will remain down until operations staff are able to safely return. 

The Flagstaff Fire started yesterday afternoon, spreading rapidly and leading UCAR to issue an informal evacuation of NCAR's Mesa Lab. To enable operations staff to leave, CISL shut down all of its major systems, including Bluefire, at 4 p.m. June 26.

UCAR and local fire officials will continue to evaluate the fire situation. CISL will provide further updates about the status of Bluefire and other systems via the Notifier service.

CISL Help Desk and Consulting staff are working remotely today. Users may call 303-497-2400 or email cislhelp@ucar.edu with any questions.


June 28, 2012

The CISL Help Desk will be unable to assist walk-ins on Wednesday, June 27. The Help Desk team and HPC Consultants will be on duty, and users may continue to send email to cislhelp@ucar.edu or call 303-497-2400. Your questions will be addressed as soon as possible.

June 28, 2012

The CISL User Services Section (USS) would like your help in reviewing end-user documentation for the new Yellowstone HPC, analysis, and visualization clusters. We want to make sure we meet your needs as users of these resources.

There are two ways you can help:

  1. Comment on content that has been published already by using our online Feedback Form. We appreciate any suggestions for improvement.
  2. Review new content before it is published for general consumption. You will occasionally receive draft copy for a new web page and be asked for your input.

If you can review some early drafts, let us know through the feedback form or email B.J. Smith, USS documentation writer/editor.

June 22, 2012

NCAR-sponsored users of the Janus cluster have the opportunity to attend a new user training session, June 21, 2012, from 9:30 am to 11:30 am (Mountain Time), offered by the Research Computing group at the University of Colorado, Boulder, and the CISL Consulting Services Group.

The training session will consist of approximately one hour of lecture with an hour set aside for answering user questions. The lecture will cover such topics as logging in, compiling, using Janus dotkits, submitting PBS jobs, and running CESM and WRF on Janus.

The session will be offered both in the Visualization Laboratory at NCAR's Mesa Lab facility and via Adobe Connect. Anyone interested in participating, remotely or on site, should contact csg-train@ucar.edu to receive further meeting details.

If there is a topic that you are particularly interested in, please include that information as well, and we will try to accommodate as many such topics as possible.

June 18, 2012

The 13th Annual WRF Users’ Workshop will take place June 25-29 at the NCAR Center Green Campus in Boulder. The workshop’s objectives are to discuss model development and to evaluate the model's performance. See the announcement for details.


The Weather Research and Forecasting (WRF) model tutorial will be offered in three sessions during a two-week period, July 16-27, at the NCAR Foothills Laboratory. Participants can attend any combination of sessions. Details are available here.

June 13, 2012
 

On June 1, NCAR's Blue Gene/L system, Frost, stopped accepting compute jobs, thus ending the system's more than seven years of service to computational scientists from NCAR, the University of Colorado, Boulder, the former TeraGrid program, and other organizations.

The first rack of the Frost system was delivered March 15, 2005, and accepted March 28, 2005. Initially dedicated to collaborations between NCAR and CU-Boulder, Frost became available to TeraGrid users in July 2007. In September 2009, the system was augmented with three additional racks from the San Diego Supercomputer Center, bringing the system to its final configuration of 8,192 processors. As a single-rack system, Frost delivered approximately a million core-hours per month, which more than tripled following its expansion.

Frost completed its four years of TeraGrid service in July 2011. CISL was able to keep the system running to support a few NCAR and CU-Boulder collaborations, as well as to support the Asteroseismic Modeling Portal (AMP) gateway. Notably, since its TeraGrid retirement, usage of the six-year-old Frost increased, delivering more than 3 million core-hours per month on average to its small set of devoted users.

In total, Frost delivered more than 126 million core-hours to TeraGrid and non-TeraGrid users over the 58 months since July 2007.

Frost's file systems will remain available to users through the end of June 2012 to provide access for data migration. Frost's users are being directed to other HPC opportunities, including the CU-NCAR Janus cluster, CISL's Bluefire and forthcoming Yellowstone, and XSEDE (successor to TeraGrid) systems.

June 8, 2012

When you need to transfer large files or data sets between our Globally Accessible Data Environment (the GLADE centralized file service) and remote destinations such as XSEDE facilities, we recommend using Globus Online. It provides a convenient, easy-to-use interface and offers a feature called Globus Connect for moving files to and from laptops, desktops, and other systems. More information is available here.
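
For users who prefer to script transfers rather than use the web interface, the sketch below shows one way to submit a GLADE-to-remote transfer with the Globus Python SDK (globus_sdk). The SDK is not mentioned in the bulletin above, and the client ID, endpoint UUIDs, and file paths are placeholders, so treat this as an illustration rather than a supported recipe.

    import globus_sdk

    # Placeholders -- register your own native app at developers.globus.org and
    # substitute the real GLADE and destination endpoint UUIDs.
    CLIENT_ID = "your-native-app-client-id"
    GLADE_ENDPOINT = "uuid-of-the-glade-endpoint"
    REMOTE_ENDPOINT = "uuid-of-the-remote-endpoint"

    # Interactive login to obtain a transfer access token.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    auth_code = input("Paste the authorization code here: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Build and submit the transfer request.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
    tdata = globus_sdk.TransferData(tc, GLADE_ENDPOINT, REMOTE_ENDPOINT,
                                    label="GLADE to remote site")
    tdata.add_item("/glade/scratch/username/output.tar", "/remote/path/output.tar")
    result = tc.submit_transfer(tdata)
    print("Submitted transfer task:", result["task_id"])

Globus handles the submitted task asynchronously, so the script can exit once the submission succeeds; progress can then be monitored through the Globus web interface.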

See CISL best practices to learn how to make the best use of your computing and storage allocations.

 
May 31, 2012

CISL recently became aware of an issue related to the accounting information associated with a large number of HPSS files. We learned that, due to some HPSS changes performed in January to allow CISL to maintain consistent group information across the Yellowstone environment, the "billing information" -- the project to which a file is charged -- was also inadvertently changed for some files.

While not all users or files were affected, many were, so we are providing this information to all HPSS users. The cause has been identified and, although CISL will avoid similar changes in the future, users should understand it because they can affect their own files in the same way using standard HSI commands.

The following key points should help users understand the problem and whether they might have been affected:

1. We emphasize that NO files were altered or lost due to these changes. For some files, only the project to which the files were being charged was changed.

2. If you have only one project to which you can charge HPSS usage, your files were not affected. You can view your available projects and change your default project by visiting the CISL Portal (https://cislportal.ucar.edu/).

3. For users with more than one valid HPSS project, files charged to a non-default project may now be billed to your default project. Users with significant HPSS holdings may want to check the account/project information for their files. CISL has some information about which users were affected and how many of each user's files were affected. However, we do not know exactly which files may have been affected. Please contact cislhelp@ucar.edu for these details.

4. In most cases, the most efficient fix is to use the "chacct" command yourself on affected files or directories; a scripted approach is sketched after this list. Please see the HPSS documentation at https://www2.cisl.ucar.edu/docs/hpss/projects. Users with very large numbers of files (hundreds of thousands) can contact cislhelp@ucar.edu for assistance.

5. All users should review the HPSS documentation for the "chgrp" and "cp" commands and for setting their "default project." In HPSS, the chgrp and cp commands also change the account (i.e., project) charged for the associated file(s) to the user's HPSS default project. Given the volumes of data likely to be produced from Yellowstone, all users are strongly encouraged to become more familiar with the HSI commands related to accounting.
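
For users who would rather script the fix described in point 4, the sketch below batches chacct calls over a list of HPSS files from Python. The project code, the file paths, and the assumption that chacct takes the new project followed by the path are illustrative only; confirm the exact syntax against the HPSS documentation linked above before running this against real files.

    import subprocess

    # Hypothetical project code and file list -- substitute your own values.
    NEW_PROJECT = "P12345678"
    HPSS_PATHS = [
        "/home/username/run1/output.tar",
        "/home/username/run2/output.tar",
    ]

    for path in HPSS_PATHS:
        # hsi executes a single quoted HSI command when invoked non-interactively;
        # here that command reassigns the project to which the file is charged.
        # Verify the chacct argument order against the CISL HPSS documentation first.
        cmd = "chacct {0} {1}".format(NEW_PROJECT, path)
        print("Running: hsi", cmd)
        subprocess.run(["hsi", cmd], check=True)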

We regret the error on our part and will provide assistance wherever possible.

Please contact cislhelp@ucar.edu if you have any questions or concerns.
