The Institute for CyberScience (ICS) Advanced CyberInfrastructure (ACI) central facility is located on Penn State’s University Park campus, in the Data Center of the Computer Building. The facility serves as the main computing infrastructure for the Penn State research community. The system comprises 37 racks of equipment located in the building’s two main areas.
ICS-ACI operates 16,000 standard- and high-memory cores to support Penn State research. The most recent procurement added 15 Dell M1000E blade server enclosures with 240 M620E blades, 27 Dell R920 servers, 21 Dell R720 servers, 2 Dell R720XD servers, and 6 Dell R620 servers, accounting for 6,000 of the 16,000 cores.
The computing environment runs Red Hat Enterprise Linux 6.
Nine UPS systems, totaling 2,223 kW, serve the facility; they are allocated to the spaces served based on redundancy requirements. The general compute area has two pairs of 360 kW UPS systems, each pair operating as a 360 kW redundant system, for a total of 720 kW of redundant power. The co-location area has two 144 kW UPS systems providing 144 kW of redundant power. The ICS-ACI area has two 202.5 kW UPS systems and one 90 kW UPS system, all using single-path distribution.
ICS-ACI provides over 2.5 PB of network-attached storage (NAS) to support users’ Home, Work, and Group storage needs, 2.5 PB of General Parallel File System (GPFS) storage for Scratch, and 4 PB of tape storage for backups. The NAS system consists of multiple Nexenta NAS pools, deployed on the Dell Nexenta reference architecture, that serve users’ Home, Work, and Group storage. The GPFS storage pool is provided by an IBM parallel Scratch storage system, and the tape backup system is an IBM TS3500 tape library.
The ICS-ACI system has a high-performance network fabric built on Brocade VCS fabric technology. It consists of two Brocade VDX 8770-8 core switches with 10/40/100 GbE network links, two Brocade VDX 8770-8 aggregation switches, and an 80 GbE vLAG between the core and aggregation switches, with Mellanox FDR InfiniBand for high-speed node connectivity. The IBM GPFS storage is accessed via RDMA.
The entire architecture employs a research-centric software stack available to users. Customized environments allow researchers to deploy software from pre-compiled, tested software catalogs. The software stack supports both commercial and open-source software.
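On clusters that expose their software catalogs through the Environment Modules (or Lmod) tool, users typically discover and load pre-compiled software as sketched below. The module names and the use of this particular tool are assumptions for illustration, not confirmed ICS-ACI specifics; `module avail` shows what is actually installed.

```shell
# List the software catalogs visible on the current system
module avail

# Load a compiler and an MPI library (names are illustrative placeholders)
module load gcc
module load openmpi

# Show what is currently loaded in this session
module list
```

Loading a module adjusts environment variables such as `PATH` and `LD_LIBRARY_PATH` for the current shell session only, so different users (and different jobs) can select different software versions without conflict.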
ACI-i (Interactive) Systems
ACI-i provides a set of interactive cores configured as common GUI interactive systems. It is intended for users who want to test their code before running jobs on ACI-b, users with small jobs that do not require ACI-b’s compute resources, and for pre- and post-processing.
ACI-b (Batch) System
ACI-b is configured to support traditional “batch processing” jobs. Users may submit jobs to a variety of queues. Users with specified core allocations are guaranteed start times and job priority, and may burst up to four times their core allocation within ACI-b; exceptions may be made upon special request to iASK@ics.psu.edu. There is no wall-time limit on guaranteed or burst jobs, provided the user has core-allocation time remaining. Users without a core allocation can submit their jobs to the ACI-b open queue, which provides computing resources based on system availability.
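As a rough sketch, a batch job on a PBS/Torque-style scheduler is described by a job script of directives followed by the commands to run. The queue name, allocation account, resource values, and program name below are illustrative placeholders, not actual ACI-b settings.

```shell
#!/bin/bash
#PBS -N my_analysis           # job name (placeholder)
#PBS -l nodes=1:ppn=4         # 1 node, 4 cores (illustrative request)
#PBS -l walltime=04:00:00     # requested wall time
#PBS -A my_allocation         # core-allocation account (placeholder)
#PBS -q open                  # queue name; shown here as the open queue

cd "$PBS_O_WORKDIR"           # run from the directory the job was submitted from
./run_analysis                # your program (placeholder)
```

The script would be submitted with `qsub job.pbs`, and job status checked with `qstat -u $USER`; actual queue names, accounts, and limits should be confirmed with the i-ASK Center.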
Help us build the future. Contact ICS at Penn State.
- email: email@example.com
- phone: ICS-ACI Support (i-ASK Center): 814-865-4275 | General Inquiries: 814-867-1467