Daily Bulletin Archive

Dec. 16, 2011

In concert with the Unidata Seminar Series, CISL presented a briefing on the Yellowstone system, NCAR's forthcoming data-intensive petascale environment, on November 29, 2011. The briefing discussed the Yellowstone hardware, including the disk resource and the data analysis and visualization clusters; the software environment; and the allocation opportunities available to the various user communities.

For those unable to attend in person, the recorded webcast is available at the link below.

http://www.unidata.ucar.edu/community/seminars/index.html#2011

For more details on Yellowstone, see:
http://www2.cisl.ucar.edu/resources/yellowstone

Dec. 16, 2011

The old /blhome home area on Bluefire has been available for a little over a month so that you can copy your files from it. We plan to decommission /blhome on Tuesday, December 13; please copy any files you need before then.

Dec. 14, 2011

UPDATE: We're reposting this announcement to highlight a clarification to the ASD eligibility requirements posted online. Eligible projects may span all of the geosciences and supporting computational sciences.

CISL is pleased to announce the availability of dedicated, large-scale resources on NCAR's recently announced Yellowstone system from approximately May through July 2012 as part of its Accelerated Scientific Discovery (ASD) initiative for university and NCAR research.

NSF-supported university researchers interested in applying for ASD computational resources can view eligibility and proposal requirements at: https://www2.cisl.ucar.edu/docs/allocations/asd. Applications are due January 13, 2012.

NCAR researchers interested in applying for ASD computational resources may view eligibility and proposal requirements at: https://www2.cisl.ucar.edu/docs/allocations/2012nsc/instructions. NCAR ASD projects will be chosen from submissions for NCAR Strategic Capability (NSC) projects, also due on January 13, 2012.

The Yellowstone system will be a 1.6-petaflops IBM iDataPlex cluster with 74,592 Intel Sandy Bridge EP cores, 149.2 TB of memory, and 11 PB of parallel disk storage. Yellowstone is expected to deliver nearly 30 times the capacity of NCAR's current Bluefire system.

View the UCAR press release announcing Yellowstone at:
http://www2.ucar.edu/news/5662/ncar-selects-ibm-supercomputer-system

For more details on Yellowstone, see:
http://www2.cisl.ucar.edu/resources/yellowstone

Questions on the Yellowstone system, ASD allocations, and the user transition process may be directed to cislhelp@ucar.edu.

Dec. 8, 2011

The CISL Help Desk will be unable to assist walk-ins on Wednesday, December 4. The Help Desk team and HPC consultants will remain on duty, however, and users may continue to send email to cislhelp@ucar.edu or call 303-497-2400. Your questions will be addressed as soon as possible.

Dec. 8, 2011

NCAR users attending the American Geophysical Union (AGU) Fall Meeting in San Francisco, Calif., next week are encouraged to stop by the NCAR booth (#1222) to learn more about our forthcoming data-centric Yellowstone environment.

Davide Del Vento, one of CISL's HPC consultants, will showcase the Yellowstone system and be on hand to answer user questions. Presentations are scheduled for Wednesday (Dec. 7) and Thursday (Dec. 8) at 11 a.m. and 3 p.m. both days.

The Yellowstone environment at NWSC will greatly expand the opportunities for researchers in the geosciences and related fields. The new system will have more than 600 million core-hours available for allocation each year.

Nov. 29, 2011

Yellowstone Briefing
FL2 Large Auditorium (1022) or via Webcast
November 29, 2011 @ 2 p.m. MST

In concert with the Unidata Seminar Series, CISL will present a briefing on the Yellowstone system, NCAR's forthcoming data-intensive petascale environment, on November 29, 2011, at 2 p.m. MST in the NCAR Foothills-2 Large Auditorium (1022) or via webcast at http://www.fin.ucar.edu/it/mms/fl-live.htm. The briefing will discuss the Yellowstone hardware, including the disk resource and the data analysis and visualization clusters; the software environment; and the allocation opportunities available to the various user communities.

The event will be webcast and recorded for those unable to attend in person.

NCAR will continue to offer several allocation opportunities, the first of which will be for the Accelerated Scientific Discovery (ASD) period. A small number of large-scale, fast-turnaround projects in the geosciences will be selected for early, priority access from May to July 2012. The submission deadline for ASD projects will be January 13, 2012.

Yellowstone will be the inaugural computing resource at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. Yellowstone will be a 1.6-petaflops IBM iDataPlex cluster with 74,592 processor cores and 149.2 TB of memory. The Yellowstone environment at NWSC will greatly expand the opportunities for researchers in the geosciences, with more than 600 million core-hours available each year.

For more details on Yellowstone, see:
http://www2.cisl.ucar.edu/resources/yellowstone

Nov. 25, 2011

NCAR has announced that IBM will install a massive central file and data storage system, a petascale high-performance computational cluster, and a system for visualizing the data at the new NCAR-Wyoming Supercomputing Center (NWSC). Equipment delivery will commence in early 2012, with production computing operations planned for summer 2012.

The new system, named Yellowstone, and its data-centric supercomputing environment will be the NWSC's inaugural system. Yellowstone is expected to deliver 1.6 petaflops peak computing performance and provide nearly 30 times the computational performance of Bluefire. Yellowstone will be accompanied by a nearly 11-petabyte disk system.

For the full announcement, see the UCAR Communications site at:

https://www2.ucar.edu/news/5662/ncar-selects-ibm-supercomputer-system

For technical details, the CISL site has published initial information at:

http://www2.cisl.ucar.edu/resources/yellowstone

Nov. 10, 2011

As part of NCAR's participation in the XSEDE collaboration (www.xsede.org), CISL is pleased to augment our training offerings with relevant courses offered by XSEDE partners. The following Fortran course from TACC is offered onsite in Austin, Texas, and can also be viewed via webcast.

Fortran 90/95/2003 for HPC

November 10, 2011 (Thursday)
1 p.m. – 5 p.m. (CT)
J.J. Pickle Research Campus
ROC 1.603
10100 Burnet Rd.
Austin, TX 78758

Fortran is a modern language that is reviewed and updated regularly to meet the needs of the scientific community. It facilitates a traditional procedural programming style, but also supports object-oriented programming similar to C++.

Fortran programming skills are highly useful for developing new applications that achieve excellent performance, and for working with the large body of existing scientific codes written in Fortran. These skills can be applied directly at computing centers and in any Linux/Unix/Windows environment. The class is intended for intermediate users wishing to gain expertise in Fortran 90/95/2003/2008 programming.

This class will be webcast.

For more information and to register, see https://www.xsede.org/web/xup/course-calendar. (Registration requires creation of an XSEDE portal account.)

Please submit any questions you may have via the XSEDE User Portal: https://portal.xsede.org/help-desk.

Nov. 1, 2011

CISL now offers the HPSS Tape Archiver (HTAR) utility, which lets users package files into a single archive file for efficient transfer to HPSS. With HTAR, there is no need to create and store archive files locally.

Documentation for this new utility is available here:

http://www2.cisl.ucar.edu/docs/hpss/htar

In addition to packaging files and transferring archive files to HPSS, HTAR creates and saves an index file in HPSS. It also allows retrieval of specified files from HPSS without copying the entire archive file to a local file system.
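As a quick sketch of typical usage (HTAR follows familiar tar-style syntax; the user name, archive name, and file names below are hypothetical):

# package the local directory run01 into an archive written directly to HPSS
htar -cvf /MYUSERNAME/run01.tar run01

# list the archive's contents using the index stored in HPSS
htar -tvf /MYUSERNAME/run01.tar

# retrieve a single file without copying the entire archive locally
htar -xvf /MYUSERNAME/run01.tar run01/output0001.nc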

Oct. 20, 2011

Recently, CISL warned us about problems in older versions of NetCDF and attempted to update the NetCDF libraries on Bluefire. Because that change disrupted currently running cases, and because the problem was rare and, as far as we were aware, had never been seen, CSEG asked that the change be backed out and the older version restored on Bluefire. We have now found a case in which the old NetCDF library corrupts CESM output files, so we recommend that all CESM users make the following changes and recompile.

In the file scripts/ccsm_utils/Machines/Macros.bluefire, replace the (-) lines with the (+) lines:

-NETCDF_PATH := /usr/local
+NETCDF_PATH := $(NETCDF)

-LDFLAGS := -q64 -bdatapsize:64K -bstackpsize:64K -btextpsize:32K
+LDFLAGS := -q64 -bdatapsize:64K -bstackpsize:64K -btextpsize:32K -lnetcdff

In the file scripts/ccsm_utils/Machines/env_machopts.bluefire, append the following two lines:

source /contrib/Modules/3.2.6/init/csh

module load netcdf/4.1.3_seq

This change applies to all CCSM4 and newer versions; if you are using an older version, please contact us. We apologize for the inconvenience. Please contact Jim Edwards (jedwards@ucar.edu) if you have questions or concerns.
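After making these edits, each affected case must be cleaned and rebuilt. A minimal sketch, assuming the usual CCSM4 case-script naming on Bluefire ($CASE stands in for your case name and will differ):

# change to the case directory created by create_newcase
cd $CASEROOT

# remove objects compiled against the old NetCDF library
./$CASE.bluefire.clean_build

# recompile against the netcdf/4.1.3_seq module
./$CASE.bluefire.build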
