Daily Bulletin Archive

November 12, 2012

NCL version 6.1.0 will be the default version on the DASG (mirage, etc.) and CGD machines starting October 31, 2012. This version will not be available on Bluefire.

For information about what's new in this version, see:


If you have any questions or problems, or if you need to use an older version of NCL, send email to Mary Haley, haley@ucar.edu.

November 12, 2012

Rogue Wave Software is offering an expanded "TVExpress Program" to students, post-docs and faculty members. This program provides a personal license for students until graduation and a 6-month license to post-docs and faculty at no charge. The intent is to reduce the time spent debugging on HPC systems by introducing "Parallel Debugging" into the software development process earlier. This offer is not meant to replace the need to debug at scale on HPC systems. The "personal license" runs on individual multicore workstations, laptops, and PCs running Mac OS or Linux.

Students, post-docs and faculty interested in applying the TotalView Parallel Debugger on their personal Linux workstation or Mac should email Dennis Andrews, dennis.andrews@roguewave.com, for more information. Students should include "TVExpress Student Program" in the subject title. Post-docs and Faculty should include "TVExpress Academic Research Program" in the subject title.

October 29, 2012

Yellowstone, NWSC GLADE, Geyser/Caldera: Downtime extended through Friday p.m. (estimated)

No Scheduled Downtime: Bluefire, HPSS, ML GLADE, Firefly, Lynx, DAV

October 22, 2012

On Saturday, 20 October 2012, the NCAR/CISL Data Center will be shut down to facilitate preventive maintenance on select infrastructure support systems. This is the normal fall-season downtime for this purpose. Please note that this activity is a complete shutdown scheduled from 6 a.m. to 6 p.m. Some computing systems, depending on their function, will begin their shutdown Friday afternoon.

Most significantly, the Bluefire, Mirage/Storm, HPSS, and GLADE systems will be taken down starting at 5 p.m. on Friday, 19 October. Please make arrangements accordingly.

Due to network maintenance, the following services will experience outages of up to half an hour during the course of the day:

- Wireless networking

- VPN and dialup

- UCAS token authentication (CRYPTOCard, Yubikey)

- UCAS password access for web and mail servers


- UCAR mail servers

Once all maintenance work is completed, the systems will be brought back up in their order of dependency. CISL and Facilities Management & Sustainability will use the 12-hour window to accomplish all of the scheduled tasks.

Remember to shut down your office systems and/or workstations at the Mesa Lab before leaving work on Friday, 19 October 2012.

If you have any questions or concerns please send a message to cislhelp@ucar.edu.

October 19, 2012

Work on Yellowstone has progressed to the point that we can provide reasonably solid schedules for access to Yellowstone and the decommissioning of Bluefire.

Yellowstone was officially accepted by CISL from IBM on September 30. An initial run of the High-Performance Linpack (HPL) benchmark shows that the system can sustain more than 1.2 petaflops.

IBM and CISL are continuing to address a number of issues to make the system production-ready. Selected early users from the Accelerated Scientific Discovery (ASD) projects and NCAR were given access to Yellowstone on October 8. Their feedback and experiences are helping CISL and IBM improve the user experience over the next two weeks and prepare the system for the full user community.

Looking ahead, users should keep in mind the following key dates:

* October 22 (week of): ASD teams will officially begin their proposed activities.

* November 1: Access will be granted to the full user community. Keep in mind that ASD teams will have priority in the queues for the first two months.

* December 31: At the present time, we anticipate keeping Bluefire running through the end of the calendar year.

* Early 2013: Following the decommissioning of Bluefire, the Mirage cluster and GLADE disk system at the Mesa Lab will remain available for a short period so users can migrate files on disk to the new GLADE system at NWSC.

As always, these dates may change if critical situations arise. However, based on early users' experiences, we are confident that we will be able to hold closely to this schedule.

October 15, 2012

HPC Python Tutorial
October 15, 2012
9 a.m. to 4 p.m. (CT)
Texas Advanced Computing Center
J.J. Pickle Research Campus
ROC 1.900
10100 Burnet Rd.
Austin, TX 78758

This class will be webcast.

Registration will close on Friday, October 12, at 1 pm (CT).

As HPC widens its vision to include big data and non-traditional applications, it must also embrace languages that are easier for the novice, more robust for general computing, and more productive for the expert. One candidate language is Python. Python is a versatile language with tools for tasks as diverse as visualizing large amounts of data, creating innovative user interfaces, and running large distributed jobs. Unfortunately, Python has a reputation for poor performance. This tutorial gives users practical experience using Python for scientific computing tasks. Topics include array computing with NumPy, interactive development with IPython, low-level C linking with Cython, distributed computing with MPI, and performance issues.

The tutorial will feature guest speaker Dr. Travis Oliphant, the author of NumPy and SciPy.  Dr. Oliphant will discuss the use of array computing in Python and his latest creation, Numba, a just-in-time compiler for NumPy.
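To give a flavor of the array-computing material, here is a minimal, self-contained sketch (not taken from the tutorial itself) contrasting a pure-Python loop with the equivalent vectorized NumPy expression; the function names are illustrative, and only NumPy basics that were already present in the 1.6-era API are used.

```python
# Illustrative sketch: the same moving average written as an explicit
# Python loop and as a vectorized NumPy cumulative-sum expression.
import numpy as np

def moving_average_loop(x, w):
    """Pure-Python moving average: clear, but slow for large arrays."""
    out = []
    for i in range(len(x) - w + 1):
        out.append(sum(x[i:i + w]) / w)
    return out

def moving_average_numpy(x, w):
    """Vectorized version: one cumulative sum and one subtraction."""
    c = np.cumsum(np.asarray(x, dtype=float))
    c[w:] = c[w:] - c[:-w]
    return c[w - 1:] / w

data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(moving_average_loop(data, 2))            # [1.5, 2.5, 3.5, 4.5]
print(moving_average_numpy(data, 2).tolist())  # [1.5, 2.5, 3.5, 4.5]
```

Pushing the loop into compiled NumPy operations like this is the basic performance idea behind much of the tutorial's material; Cython and Numba address the cases where an algorithm cannot be expressed this way.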

Recommended prerequisites:

Basic programming knowledge with Python. A good tutorial is available online here:


Remote attendees can install the library versions used in the tutorial, all of which are included in Anaconda Pro [0] and in the Enthought Python Distribution [1] (which does not include mpi4py):

Python 2.7
numpy 1.6
scipy 0.10
IPython 0.12
cython 0.15
mpi4py 1.2.2

[0] https://store.continuum.io/cshop/anaconda
[1] http://enthought.com/products/epd.php

Staff support for remote users will be limited; however, the lecturers will field questions.


Please submit any questions that you may have via the TACC Consulting System.

October 9, 2012

Bluefire: Downtime Tuesday, October 9, 6:00am-1:00pm

HPSS: Downtime Tuesday, October 9, 7:00am-9:00am

No Scheduled Downtime: DAV, GLADE, Lynx

October 5, 2012

An XSEDE tutorial to be webcast Oct. 15, 2012, will give participants practical experience using Python for scientific computing tasks. Topics include array computing with NumPy, interactive development with IPython, low-level C linking with Cython, distributed computing with MPI, and performance issues. Details and registration are available from XSEDE.

October 1, 2012

Store only the data that you need long-term in the HPSS tape archive. Rather than routinely copying output to HPSS right after completing simulation runs, for example, use your /glade/scratch space for analysis and save the data to HPSS only after post-processing. Using the tape archive only for long-term storage helps conserve your storage allocation and allows the HPSS system to run more efficiently for everyone.

With Yellowstone's 5-PB /glade/scratch file space, 10-TB default quota, and 90-day retention period, CISL encourages users to evaluate and simplify their workflows by cutting out intermediate, temporary data movement steps. 

See CISL best practices for other ways to make the most of your computing and storage allocations.

October 1, 2012

The 3rd International Workshop on Advances in High-Performance Computational Earth Sciences: Applications and Frameworks (IHPCES) has an open call for papers with a submission deadline of January 15, 2013. NCAR researchers and users can participate in this workshop by submitting a paper reflecting their current research in the area of computational science. IHPCES 2013 is being held in conjunction with the 13th International Conference on Computational Science (ICCS 2013), "Computation at the Frontiers of Science," in Barcelona, Spain, June 5-7, 2013.

The 3rd IHPCES workshop provides a forum for presentation and discussion of state-of-the-art research in high-performance computational earth sciences. Emphasis will be on novel advanced high-performance computational algorithms, formulations and simulations, as well as the related issues for computational environments and infrastructure for development of high-performance computational earth sciences. The workshop facilitates communication between earth scientists, applied mathematicians, computational and computer scientists and presents a unique opportunity for them to exchange advanced knowledge, insights and science discoveries. With the imminent arrival of the exascale era, strong multidisciplinary collaborations between these diverse scientific groups are critical for the successful development of high-performance computational earth sciences applications. Presentations and audience representation from the broad earth sciences community are strongly encouraged. Contributions are solicited in (but not restricted to) the following areas:

* Large-scale simulations using modern high-end supercomputers in earth sciences, such as atmospheric science, ocean science, solid earth science, and space & planetary science, as well as multi-physics simulations.

* Advanced numerical methods for computational earth sciences, such as FEM, FDM, FVM, BEM/BIEM, mesh-free methods, and particle methods.

* Numerical algorithms and parallel programming models for computational earth sciences.

* Optimization and reengineering of applications for multi/many-core processors and accelerators.

* Strategy, implementation, and applications of pre/post-processing and handling of large-scale data sets for computational earth sciences, such as parallel visualization, parallel mesh generation, I/O, and data mining.

* Frameworks and tools for development of codes for computational earth sciences on peta/exascale systems.

Authors are invited to submit manuscripts reporting original, unpublished research and recent developments/theoretical considerations in computational earth sciences and related issues by January 15, 2013. Accepted papers will be printed in the conference proceedings of ICCS 2013, published by Elsevier Science in the open-access Procedia Computer Science series. After the conference, selected papers may be invited for a special issue of a major journal, such as Springer's Lecture Notes in Earth Sciences (LNES).

The workshop announcement is posted at: http://hpgeoc.sdsc.edu/IHPCES2013/.

Conveners: Yifeng Cui, University of California at San Diego (yfcui@sdsc.edu), and Xing Cai, Simula Research Laboratory, Norway (xingca@simula.no).