SahasraT

Cray XC40

Introduction

The Cray XC40 is the latest entry in SERC's HPC class of systems. It combines Intel's Xeon Haswell processors in the CPU cluster with Nvidia Tesla K40 GPU cards and Intel Xeon Phi 7210 processors in the accelerator clusters, all connected by Cray's own Aries high-speed interconnect in a dragonfly topology and backed by DDN high-performance storage units.

CPU-only cluster: an Intel Haswell 2.5 GHz based cluster with 1376 nodes; each node has two 12-core CPU sockets and 128 GB RAM, connected over the Cray Aries interconnect.

Accelerator-based clusters: two accelerator clusters, one with Nvidia GPU cards (44 nodes) and the other with Intel Xeon Phi 7210 processors (24 nodes). Each GPU node has a single 12-core Intel Ivy Bridge 2.4 GHz CPU socket and a Tesla K40 card with 2880 cores and 12 GB device memory. The Xeon Phi 7210 nodes are self-hosted; each has 64 cores, 16 GB of accelerator memory and 96 GB of DDR4 memory.

High-speed storage: 2 PB of usable space on high-speed DDN storage units running Cray's parallel Lustre file system.

Software environment: the entire system runs Cray's customized Linux OS, the Cray Linux Environment, and supports architecture-specific compilers from Cray and Intel as well as the open-source GNU compilers. The system also hosts parallel programming environments such as OpenMP, MPI, CUDA and the Intel cluster software, along with an extensive range of parallel scientific and mathematical libraries including BLAS, LAPACK, ScaLAPACK, FFTW, HDF5, NetCDF, PETSc and Trilinos. To support parallel program development, the latest version of the DDT parallel debugger and profiler is available on the system.
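On Cray XC systems, compilers and tuned libraries are normally selected through the modules environment and invoked through the Cray compiler wrappers (cc, CC, ftn). The sketch below shows a typical build session; the module names follow standard Cray conventions but are assumptions here, so check `module avail` on the SahasraT login nodes for what is actually installed.

```shell
#!/bin/sh
# Sketch of a typical build session on a Cray XC40 login node.
# Module names are assumptions following Cray conventions; verify with
# `module avail` on the system. The guard lets this run harmlessly elsewhere.
if command -v module >/dev/null 2>&1; then
  module swap PrgEnv-cray PrgEnv-intel   # switch to the Intel compiler suite
  module load cray-fftw cray-hdf5        # architecture-tuned FFTW and HDF5
fi
# The Cray wrappers cc/CC/ftn automatically add MPI and any loaded cray-*
# libraries to the compile and link lines, e.g.:
#   cc  solver.c   -o solver
#   ftn solver.f90 -o solver
printf '%s\n' "build environment sketch done" > env_sketch.log
```

Because the wrappers inject include paths and link flags for the loaded modules, explicit options such as -lmpi are usually unnecessary.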

The system is accessible through login nodes at the domain name xc40.serc.iisc.ernet.in and is used for launching jobs in batch execution mode. Users are expected to ssh into the login nodes, create appropriate job scripts and submit them to the PBSPro batch scheduler. The parallel file system is accessible to users through job scripts and is intended for use during job execution.
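A minimal PBSPro job script for the CPU cluster might look like the following. The job name, resource selection and walltime are illustrative placeholders, not site policy; note that on the XC40 parallel executables are launched with aprun rather than mpirun. The sketch writes the script to job.pbs:

```shell
#!/bin/sh
# Write a minimal PBSPro job script for the Haswell CPU cluster.
# All resource values below are placeholders -- adjust to your allocation.
cat > job.pbs <<'EOF'
#!/bin/bash
#PBS -N hello_mpi
#PBS -l select=2:ncpus=24      # two Haswell nodes, 24 cores each
#PBS -l walltime=01:00:00
#PBS -j oe                     # merge stdout and stderr

cd "$PBS_O_WORKDIR"
# On the XC40, parallel executables are launched with aprun:
aprun -n 48 ./my_mpi_program   # 48 MPI ranks across the two nodes
EOF
echo "wrote job.pbs; submit with: qsub job.pbs"
```

Submit the script with `qsub job.pbs` and monitor it with `qstat -u $USER`.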

Vendor

OEM – Cray Inc
Authorised Seller – Cray Inc, Seattle, WA 98164, USA

SahasraT Current Status 

SahasraT User Portal

Comprehensive System Overview

System Architecture and Configuration

Programming Environment

Commonly used Software Applications

Ongoing Training Sessions

  • SERC User Training Session(19/01/2015) (PDF) (Video)
  • Cray-Intel Training Sessions
  • Cray Workshop on High Performance Computing Tools
  • Allinea DDT Workshop (29/02/2016) (PDF) (Video)
  • Workshop on Science Using SahasraT (11/05/2016)
    • A Study with an Earth System Model – Prof. Ravi S. Nanjundiah (PDF)
    • Defects in Materials – Manish Jain (PDF)
    • High throughput Computational design of Thermoelectric Materials – Abhishek K. Singh (PPT)
    • Cray Roadmap to Exascale – Hee-Sik Kim, Cray APAC (PDF)
  • Workshop on High Performance Computing & Parallel Programming Concepts
    • Day 1 (10/09/2016) (PDF)

Accessing the system

The XC40 has login nodes through which users can access the machine and submit jobs. The machine is accessible for login using ssh from inside the IISc network (ssh computational_id@sahasrat.serc.iisc.ernet.in). Access is granted after applying for a Cray XC40 account, for which:

Fill in the online HPC application form here and submit it at Room No. 117, SERC.

The HPC application form must be duly signed by your advisor/research supervisor.

Helpdesk

For any queries, email helpdesk_serc or contact the System Administrator, Room No. 109, SERC.