Daily Bulletin Archive

October 15, 2012

HPC Python Tutorial
October 15, 2012
9 a.m. to 4 p.m. (CT)
Texas Advanced Computing Center
J.J. Pickle Research Campus
ROC 1.900
10100 Burnet Rd.
Austin, TX 78758

This class will be webcast.

Registration will close on Friday, October 12, at 1 pm (CT).

As HPC widens its vision to include big data and non-traditional applications, it must also embrace languages that are easier for the novice, more robust for general computing, and more productive for the expert. One candidate is Python, a versatile language with tools for tasks as diverse as visualizing large amounts of data, creating innovative user interfaces, and running large distributed jobs. Unfortunately, Python has a reputation for poor performance. In this tutorial, we give users practical experience with Python for scientific computing tasks. Topics include array computing with NumPy, interactive development with IPython, low-level C linking with Cython, distributed computing with MPI, and performance issues.
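As a taste of the array-computing and performance material, the sketch below (illustrative, not taken from the tutorial) contrasts a pure-Python loop with its vectorized NumPy equivalent; the function names are ours.

```python
import numpy as np

def norm_squared_loop(values):
    # Explicit Python loop: interpreted per element, hence slow at scale.
    total = 0.0
    for v in values:
        total += v * v
    return total

def norm_squared_vectorized(arr):
    # The same arithmetic pushed into NumPy's compiled inner loop.
    return float(np.dot(arr, arr))

data = np.arange(100000, dtype=np.float64)
# Both give the same answer; the vectorized form is typically orders of
# magnitude faster on large arrays.
assert np.isclose(norm_squared_loop(data), norm_squared_vectorized(data))
```

Timing the two versions (e.g., with IPython's %timeit) is a quick way to see why "performance issues" is a tutorial topic in its own right.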

The tutorial will feature guest speaker Dr. Travis Oliphant, the author of NumPy and SciPy.  Dr. Oliphant will discuss the use of array computing in Python and his latest creation, Numba, a just-in-time compiler for NumPy.
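For a sense of how Numba is used, here is a minimal sketch (ours, not Dr. Oliphant's demo): a NumPy loop wrapped with Numba's jit decorator. Since remote attendees may not have Numba installed, the snippet falls back to a no-op decorator so it still runs as plain Python.

```python
import numpy as np

try:
    from numba import jit  # just-in-time compiles the decorated function
except ImportError:
    # Fallback: run the function as ordinary interpreted Python.
    def jit(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

@jit
def pairwise_sum(a, b):
    # An explicit loop that Numba can compile to machine code.
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = a[i] + b[i]
    return out

x = np.ones(10)
y = np.arange(10.0)
result = pairwise_sum(x, y)
assert result[3] == 4.0
```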

Recommended prerequisites:

Basic programming knowledge of Python. A good tutorial is available online here:


Remote attendees can install the library versions used in the tutorial, all included in Anaconda Pro [0] or the Enthought Python Distribution [1] (which does not include mpi4py):

Python 2.7
numpy 1.6
scipy 0.10
IPython 0.12
cython 0.15
mpi4py 1.2.2

[0] https://store.continuum.io/cshop/anaconda
[1] http://enthought.com/products/epd.php
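Remote attendees can verify their installation with a short check like the one below (an illustrative snippet, not part of the tutorial materials); mpi4py is reported as optional since EPD omits it.

```python
def check_stack(names=("numpy", "scipy", "IPython", "cython", "mpi4py")):
    """Report which of the tutorial's libraries are importable, and their versions."""
    found = {}
    for name in names:
        try:
            mod = __import__(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None  # not installed
    return found

if __name__ == "__main__":
    for name, version in check_stack().items():
        print(name, version or "not installed")
```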

Staff support for remote users will be limited; however, the lecturers will field questions.


Please submit any questions that you may have via the TACC Consulting System.

October 9, 2012

Bluefire: Downtime Tuesday, October 9, 6:00am-1:00pm

HPSS: Downtime Tuesday, October 9, 7:00am-9:00am

No Scheduled Downtime: DAV, GLADE, Lynx

October 5, 2012

An XSEDE tutorial to be webcast Oct. 15, 2012, will give participants practical experience using Python for scientific computing tasks. Topics include array computing with NumPy, interactive development with IPython, low-level C linking with Cython, distributed computing with MPI, and performance issues. See the October 15 entry above for details and registration.

October 1, 2012

Store only the data that you need long-term in the HPSS tape archive. Rather than routinely copying output to HPSS right after completing simulation runs, for example, use your /glade/scratch space for analysis and save the data to HPSS only after post-processing. Using the tape archive only for long-term storage helps conserve your storage allocation and allows the HPSS system to run more efficiently for everyone.

With Yellowstone's 5-PB /glade/scratch file space, 10-TB default quota, and 90-day retention period, CISL encourages users to evaluate and simplify their workflows by cutting out intermediate, temporary data movement steps. 
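One way to follow this advice is to script the archiving step so that only final, post-processed products are sent to HPSS. The sketch below is a hedged illustration: the paths are hypothetical, the hsi put syntax is an assumption rather than a documented CISL command, and the function only builds the command strings (a dry run) instead of executing them.

```python
import os

SCRATCH = "/glade/scratch/username/run42"   # hypothetical run directory
ARCHIVE = "/home/username/archive/run42"    # hypothetical HPSS destination

def archive_commands(final_products):
    """Build archive commands for post-processed files only (dry run)."""
    cmds = []
    for fname in final_products:
        local = os.path.join(SCRATCH, fname)
        remote = "%s/%s" % (ARCHIVE, fname)
        cmds.append("hsi put %s : %s" % (local, remote))
    return cmds

# Archive only the final products, not raw simulation output.
for cmd in archive_commands(["monthly_means.nc", "climatology.nc"]):
    print(cmd)
```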

See CISL best practices for other ways to make the best use of your computing and storage allocations.

October 1, 2012

The 3rd International Workshop on Advances in High-Performance Computational Earth Sciences: Applications and Frameworks (IHPCES) has issued a call for papers with a submission deadline of January 15, 2013. NCAR researchers and users can participate in this workshop by submitting a paper reflecting their current research in computational science. IHPCES 2013 is being held in conjunction with the 13th International Conference on Computational Science (ICCS 2013), "Computation at the Frontiers of Science," in Barcelona, Spain, June 5-7, 2013.

The 3rd IHPCES workshop provides a forum for presentation and discussion of state-of-the-art research in high-performance computational earth sciences. Emphasis will be on novel advanced high-performance computational algorithms, formulations, and simulations, as well as related issues of computational environments and infrastructure for the development of high-performance computational earth sciences. The workshop facilitates communication between earth scientists, applied mathematicians, and computational and computer scientists, and presents a unique opportunity for them to exchange advanced knowledge, insights, and science discoveries. With the imminent arrival of the exascale era, strong multidisciplinary collaborations between these diverse scientific groups are critical for the successful development of high-performance computational earth sciences applications. Presentations and audience representation from the broad earth sciences community are strongly encouraged. Contributions are solicited in (but not restricted to) the following areas:

* Large-scale simulations using modern high-end supercomputers in earth sciences, such as atmospheric science, ocean science, solid earth science, and space & planetary science, as well as multi-physics simulations.

* Advanced numerical methods for computational earth sciences, such as FEM, FDM, FVM, BEM/BIEM, mesh-free methods, particle methods, etc.

* Numerical algorithms and parallel programming models for computational earth sciences.

* Optimization and reengineering of applications for multi/many-core processors and accelerators.

* Strategy, implementation, and applications of pre/post processing and handling of large-scale data sets for computational earth sciences, such as parallel visualization, parallel mesh generation, I/O, data mining, etc.

* Frameworks and tools for development of codes for computational earth sciences on peta/exascale systems.

Authors are invited to submit manuscripts reporting original, unpublished research and recent developments/theoretical considerations in Computational Earth Sciences and related issues by January 15, 2013. Accepted papers will be printed in the conference proceedings of ICCS 2013 published by Elsevier Science in the open-access Procedia Computer Science series. After the conference, selected papers may be invited for a special issue of some major journals, such as Springer's Lecture Notes in Earth Sciences (LNES).

The workshop announcement is posted at: http://hpgeoc.sdsc.edu/IHPCES2013/.

Conveners: Yifeng Cui, University of California at San Diego (yfcui@sdsc.edu) Xing Cai, Simula Research Laboratory, Norway (xingca@simula.no).

September 26, 2012

Starting shortly before midnight (11:45 pm) on Monday, September 24, 2012, the HPSS system will be down for 24 hours to transition the metadata server to NWSC.

We anticipate having HPSS back up within 24 hours. Except for the downtime, the change will be transparent to users.

September 24, 2012

Registration open for October 2012 OpenACC GPU Programming Workshop

One hundred registrants will be accepted for the OpenACC GPU Programming Workshop, to be held October 16 and 17, 2012. The workshop includes hands-on access to Keeneland, the newest XSEDE resource, which is managed by the Georgia Institute of Technology (Georgia Tech) and the National Institute for Computational Sciences, an XSEDE partner institution.

Based on demand, the workshop is scheduled to be held at ten different sites around the country. Anyone interested in participating is asked to follow the link below and then register by clicking on the preferred site. Only the first 100 registrants will be accepted.

The workshop is offered by the Pittsburgh Supercomputing Center, the National Institute for Computational Sciences, and Georgia Tech.

Questions? Contact Tom Maiden at tmaiden@psc.edu.

Register and read more about the workshop at:

OpenACC GPU Programming Workshop

September 24, 2012

As of 8 a.m., Sept. 4, Yellowstone officially entered its acceptance test period. While this represents a major milestone, the first week was not without its challenges. The primary issues were ensuring that system state is preserved on the diskless nodes across a cold start, stabilizing the FDR InfiniBand interconnect, and reducing communication interference as the workload approached the full 4,500-node capacity of Yellowstone.

IBM and Mellanox have resolved several sources of problems, and since 04:15 Sept. 12, CISL staff have been running the full system workload, comprising six different benchmark codes, with a 99.94% success rate. IBM benchmark runs have shown compute performance very close to the expected 28.9 "Bluefire-equivalents," and GLADE benchmark performance has exceeded 80 GB/s for reads and 90 GB/s for writes.

The ATP workload testing will continue for the coming weeks, and IBM and Mellanox will continue to troubleshoot problem nodes, cables, and software configurations to improve the stability and performance of the system. While it is still too early to identify a specific date for Yellowstone to pass acceptance testing, CISL remains confident that early October is the likely timeframe.

September 18, 2012

As a reminder, NCAR's Computational and Information Systems Laboratory (CISL) invites NSF-supported university researchers in the atmospheric, oceanic, and related sciences to submit large allocation requests for the petascale Yellowstone system by September 17, 2012. Revised instructions have been posted for the next round of Large University Allocations, and all requesters are strongly encouraged to review the instructions before preparing their submissions.

These requests will be reviewed by the CISL High-performance computing Advisory Panel (CHAP), and there must be a direct linkage between the NSF award and the computational research being proposed. Please visit http://www2.cisl.ucar.edu/docs/allocations for more university allocation instructions and opportunities.

Allocations will be made on Yellowstone, NCAR's new 1.5-petaflops IBM iDataPlex system, the new data analysis and visualization clusters (Geyser and Caldera), the 11-PB GLADE disk resource, and the HPSS archive. Please see https://www2.cisl.ucar.edu/resources/yellowstone for more system details.

For the much larger Yellowstone resource, the threshold for Small University Allocations has been increased to 200,000 core-hours. Researchers with smaller-scale needs can now submit small allocation requests; see http://www2.cisl.ucar.edu/docs/allocations/university.

Questions may be addressed to: David Hart, User Services Manager, 303-497-1234, dhart@ucar.edu

September 13, 2012

Bluefire downtime Tuesday, September 11 from 6:00am - 1:00pm

HPSS downtime Wednesday, September 12 from 7:00am - 11:00am

No Scheduled Downtime: DAV, GLADE, Lynx