The Aurora HPC 10-10 excels in density, energy efficiency and reliability: it can thermally manage the top of the Xeon E5 series, the E5-2687W at 3.1 GHz (150 W TDP). This means a standard Aurora HPC 10-10 rack can deliver 100 Tflops of pure CPU computational power at 110 kW of peak consumption, that is, 1 Petaflop in 15 m² (160 sq ft). Such densities are possible thanks to a hot liquid cooling system (coolant at 50 °C and above) that improves on the one of the previous AU 5600 supercomputer, extracting heat from the system even more efficiently.
The HPC 10-10 inherits the efficient power conversion of the AU 5600 and allows data center PUEs as low as 1.05. Combined with a very respectable 900 Mflops/W, this means great energy savings and makes the Aurora high performance computer one of the most efficient and greenest systems on the market.
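As a quick sanity check of the figures above, the short sketch below recomputes the energy efficiency and floor density from the rack-level numbers quoted in this section (100 Tflops and 110 kW peak per rack, 15 m² per Petaflop); it is purely illustrative and uses no values beyond those stated in the text.

/* Quick arithmetic check of the rack-level figures quoted above.
 * Values come from the text: 100 Tflops and 110 kW peak per rack,
 * and 15 m^2 of floor space per Petaflop.
 */
#include <stdio.h>

int main(void)
{
    const double rack_tflops   = 100.0;  /* peak CPU Tflops per rack  */
    const double rack_kw_peak  = 110.0;  /* peak consumption per rack */
    const double pflop_area_m2 = 15.0;   /* floor area per Petaflop   */

    /* 100 Tflops / 110 kW = 100e6 Mflops / 110e3 W ~ 909 Mflops/W,
     * consistent with the ~900 Mflops/W quoted above.              */
    double mflops_per_watt = (rack_tflops * 1e6) / (rack_kw_peak * 1e3);

    /* 1000 Tflops over 15 m^2 ~ 66.7 Tflops/m^2, matching the
     * packaging-density figure quoted later in this datasheet.     */
    double tflops_per_m2 = 1000.0 / pflop_area_m2;

    printf("Energy efficiency: %.0f Mflops/W\n", mflops_per_watt);
    printf("Floor density:     %.1f Tflops/m^2\n", tflops_per_m2);
    return 0;
}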
Eurotech has leveraged its long experience in developing embedded systems to bring to the HPC 10-10 a high degree of RAS (reliability, availability and serviceability). The 10-10 has no moving parts, no hot spots and no noise. It has three independent sensor networks, soldered memory, and its nodes are very manageable 50x16 cm (19”x6”) blades that are hot pluggable despite being liquid cooled. There is also limited cabling inside each rack, as the backplanes handle most of the I/O communication.
High computational power
Aurora uses the latest and fastest technology available: a high-end solution capable of delivering unparalleled computational power while maintaining all the flexibility and compatibility of a CPU-based x86 solution.
High packaging density
Aurora systems are best in class in terms of computing density per rack. An Aurora configuration based on Sandy Bridge can host up to 4096 cores/512 CPUs/256 blades in a single 48U rack. In other words, this means over 66 Tflops per m², or 2 Petaflops in a studio flat, with reduced floor occupation and easier installation.
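The rack-level performance can also be cross-checked from the core count. The sketch below assumes the standard Sandy Bridge figure of 8 double-precision flops per cycle with AVX and the 3.1 GHz clock of the E5-2687W; these per-core values are general processor characteristics, not figures taken from this datasheet.

/* Rough peak-performance estimate for a fully populated rack:
 * 4096 Sandy Bridge cores at 3.1 GHz, 8 double-precision flops per
 * cycle with AVX. The clock and flops-per-cycle values are standard
 * processor figures assumed for this sketch, not datasheet values.
 */
#include <stdio.h>

int main(void)
{
    const double cores_per_rack   = 4096.0; /* from the density figures above */
    const double clock_ghz        = 3.1;    /* Xeon E5-2687W base clock       */
    const double dp_flops_per_clk = 8.0;    /* AVX: 4-wide add + 4-wide mul   */

    /* cores x GHz x flops/cycle gives Gflops; divide by 1000 for Tflops. */
    double rack_tflops = cores_per_rack * clock_ghz * dp_flops_per_clk / 1e3;

    /* ~101.6 Tflops, in line with the "over 100 Tflops/rack" quoted here. */
    printf("Peak rack performance: %.1f Tflops\n", rack_tflops);
    return 0;
}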
Hot liquid cooling
Aurora removes component-generated heat using hot liquid cooling (water up to 50 °C), with no need for air conditioning in any climate zone. This allows a data center PUE as low as 1.05.
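For reference, PUE is the ratio of total facility power to IT equipment power, so a PUE of 1.05 leaves only about 5% of overhead on top of the IT load. A small illustration using the rack figures from this datasheet (the 100 kW rack load is taken from the specifications further down):

/* PUE = total facility power / IT equipment power. With PUE = 1.05 and
 * a ~100 kW rack (figure from the specifications below), cooling and
 * power distribution add only about 5 kW of overhead per rack.
 */
#include <stdio.h>

int main(void)
{
    const double pue   = 1.05;   /* data center PUE quoted above      */
    const double it_kw = 100.0;  /* typical fully populated rack load */

    double facility_kw = pue * it_kw;          /* total draw incl. overhead */
    double overhead_kw = facility_kw - it_kw;  /* cooling + distribution    */

    printf("Facility power: %.1f kW, overhead: %.1f kW per rack\n",
           facility_kw, overhead_kw);
    return 0;
}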
Thermal Energy Reuse
Each Aurora computational node can produce a temperature gap of 3 °C to 5 °C in the cooling liquid. By arranging multiple racks in a multistage heating configuration, it is possible to warm the coolant enough to be used for producing air conditioning, generating electricity or simply heating a building.
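To give an idea of the energy available for reuse, the sketch below applies the basic heat-transport relation Q = m_dot * c_p * dT to the 3-5 °C per-node gap quoted above. The 340-390 W/node range, the 256 blades per rack and the <2% ambient spillage come from the specifications further down; the choice of the midpoint node power and the per-rack scaling are assumptions made only for this illustration.

/* Heat available for reuse, using Q = m_dot * c_p * dT. The 3-5 degC
 * per-node gap, the 340-390 W/node range, the 256 blades/rack and the
 * <2% ambient spillage come from this datasheet; the midpoint node
 * power and the per-rack scaling are assumptions for this sketch.
 */
#include <stdio.h>

int main(void)
{
    const double cp_water   = 4186.0; /* J/(kg*K), specific heat of water  */
    const double node_power = 365.0;  /* W, midpoint of 340-390 W per node */
    const double spillage   = 0.02;   /* <2% of the heat lost to ambient   */
    const double nodes_rack = 256.0;  /* blades per fully populated rack   */

    for (double dt = 3.0; dt <= 5.0; dt += 1.0) {
        /* Per-node coolant flow required to carry the heat at this dT. */
        double m_dot = node_power * (1.0 - spillage) / (cp_water * dt); /* kg/s */
        printf("dT = %.0f degC -> about %.0f g/s of coolant per node\n",
               dt, m_dot * 1e3);
    }

    /* Per rack, roughly 0.98 * 256 * 365 W ~ 92 kW of hot water is
     * available for heating a building or driving chillers.          */
    printf("Recoverable heat per rack: ~%.0f kW\n",
           nodes_rack * node_power * (1.0 - spillage) / 1e3);
    return 0;
}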
Cooling directly on the components
Direct on-component liquid cooling limits on-board hot spots.
Reliability and availability
Quality, on-component liquid cooling, redundancy of all critical components, vibration-free operation, solid state storage, temperature control, monitoring networks (IPMI), ease of maintenance and, last but not least, Eurotech's HPC experience all contribute to high reliability and longer system availability.
No moving parts
Aurora doesn’t shake, rattle or make noise. It has no fans and does not require a dedicated room for installation.
Unified network architecture
An Infiniband switched network coexists with an optional FPGA-driven 3D Torus nearest-neighbour network.
Synchronization networks
Three independent synchronization networks (system, subdomain and local) preserve efficiency at Petascale by guaranteeing that the communication and the scheduling of all nodes are automatically handled.
Excellent design and ease of maintainability
While the Aurora supercomputers show an appealing design, they have also been conceived to guarantee ease of access and operation for easy maintenance.
Computing performance
- from 42 to over 100 Tflops/rack
Power consumption
- 340-390 W/node, 11.2 kW/chassis, 90-100 kW/rack (typical)
CPU
- Intel Xeon E5 and Xeon 5600 series
- from 3072 to 4096 cores/rack
Memory
- 8/16/32 GB or above of soldered on-board ECC DDR3 SDRAM per node
- 6/12/24 GB or above of soldered on-board ECC DDR3 SDRAM per node
- Memory bandwidth: 40 GB/s per node
Local Storage
- 80 / 160 / 256 / 512 / 1024 GB 1.8” SATA disk
- 80 / 160 / 256 GB 1.8” SATA SSD
Interconnect
- QDR Infiniband port per node (BW: 40 Gbps, latency <2 µs)
- 20+20 QDR IB ports (QSFP connections) per chassis
- Optional: 1+1 switchless 3D Torus nearest-neighbour links per node (BW: 60+60 Gbps, latency ~1 µs)
Power supply
- External AC/DC converter (85-300 VAC to 48 VDC), n+1 redundant, 97% efficiency
- In-rack DC/DC trays (48 VDC to 10 VDC), 97% efficiency
Cooling
- Entirely liquid cooled, ambient heat spillage <2%
Monitoring and Control
- IPMI, 960 measurement points per rack
Physical Characteristics
- Dimensions (Rack): H 2260 mm x W 1095 mm x D 1500 mm
- Weight (Maximum): 1560 kg (3440 lbs) per fully populated rack
- Acoustical Noise Level: <20 dB at 1 m
The adoption of Intel processors ensures compatibility with a vast range of applications, tools, operating systems and HPC-specific middleware. Being x86-based gives Aurora an almost unlimited choice of compilers, debuggers, libraries, clustering and administration tools, whether open source or proprietary; a minimal MPI example is shown after the list of supported MPI implementations below.
OS
- RHEL
- SUSE/SLES
- CentOS
- Scientific Linux and others
Compilers
- Intel Cluster Toolkit
- GNU toolchain
- Portland CDK
MPI
- Intel MPI
- OpenMPI
- MPICH
- Portland MVAPICH
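Since the software stack is standard x86 Linux plus the MPI implementations listed above, applications need nothing Aurora-specific. A minimal MPI program of the usual form builds unchanged against any of these stacks (a generic example, not Aurora-specific code):

/* Minimal MPI check: compiles unchanged against any of the MPI stacks
 * listed above (Intel MPI, OpenMPI, MPICH, MVAPICH). Generic example,
 * not Aurora-specific code.
 *
 * Build:  mpicc -o hello_mpi hello_mpi.c
 * Run:    mpirun -np 16 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}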
Debuggers and performance tools
- Totalview
- DDT
- Intel Trace Analyzer and Collector
- Intel VTune
Math Libraries
Compatibility of math libraries is implementation specific:
- Intel MKL
- IMSL (requires adaptation when used with the Intel Cluster Toolkit)
- NAG
Resource Management/ Deployment
- OpenPBS, SunGridEngine
- PBS Professional
- Bright Cluster Manager
- Platform LSF/Cluster Manager
- Rocks, Rocks+
- Torque, MOAB, xCAT
- Condor
- Parastation
Distributed File System
- Lustre over QDR Infiniband, either via OFED or TCP.
- pNFS, panFS, GPFS under test
Maintenance and Management