Performance – Up to 1 PFLOP/s per rack (peak for Intel® Xeon Phi™ 72xx nodes)
Save space – 288 blades in a 1.5 m² footprint rack
Save energy – more than 7 GFLOP/s per Watt for processor boards; data center PUE of 1.05 thanks to 2nd-generation Aurora Direct Hot Water Cooling. No need for air conditioning; up to 50% less energy consumed
Modularity and flexibility – Mix node types in the same chassis, leveraging standard, compatible, and interoperable components
Aurora Direct Hot Water Cooling – All components are cooled by water, at temperatures from 18 °C to 52 °C and variable flow rates
Reliability – Power measurement, liquid cooling, integrity, no moving parts, no hot spots
CONFIGURATION
18 nodes per chassis
16 chassis per rack
PERFORMANCE
Up to 1 PFLOP/s per rack (peak for Intel® Xeon Phi™ 72xx nodes)
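As a rough sanity check of this figure, the sketch below (Python) combines the chassis/rack configuration above with an assumed per-node peak for the top-bin Intel® Xeon Phi™ 7290; the specific SKU, clock, and FLOP/cycle figures are illustrative assumptions, not datasheet values.

# Sketch: rack peak derived from the configuration above.
# Assumes one Xeon Phi 72xx per node blade and a per-node double-precision
# peak of ~3.46 TFLOP/s (Xeon Phi 7290: 72 cores x 1.5 GHz x 32 FLOP/cycle).
nodes_per_chassis = 18
chassis_per_rack = 16
nodes_per_rack = nodes_per_chassis * chassis_per_rack        # 288 blades per rack
node_peak_tflops = 72 * 1.5e9 * 32 / 1e12                    # ~3.456 TFLOP/s per node
rack_peak_pflops = nodes_per_rack * node_peak_tflops / 1000  # ~0.995 PFLOP/s per rack
print(f"{nodes_per_rack} nodes/rack, ~{rack_peak_pflops:.2f} PFLOP/s peak")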
CHASSIS
18 node blades
PCIe backplane
Rootcard with 36 PCIe slots, chassis controller and GigE switch
CPU BLADES
2x Intel® Xeon® E5-26xx v4 processors
4 memory channels per CPU
256 GB memory per node
MANYCORE CPU BLADES
1x Intel® Xeon Phi™ 72xx (or 72xx-F) processor
6 memory channels per CPU, 192 GB per node
ACCELERATED BLADES
2x Intel® Xeon® E5-26xx v4 processors
Paired with an accelerator blade in the adjacent slot:
2x Nvidia® Tesla® K20, K40, or K80 GPUs, or Intel® Xeon Phi™ x72 co-processors
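The per-node memory capacities listed above are consistent with one DIMM per memory channel; the 32 GB DIMM size in the sketch below is an illustrative assumption, not a datasheet value.

# Sketch: node memory capacity from the channel counts above,
# assuming one 32 GB DIMM per memory channel (assumed DIMM size).
dimm_gb = 32
cpu_blade_gb      = 2 * 4 * dimm_gb   # 2 CPUs x 4 channels -> 256 GB per node
manycore_blade_gb = 1 * 6 * dimm_gb   # 1 CPU  x 6 channels -> 192 GB per node
print(cpu_blade_gb, manycore_blade_gb)  # 256 192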
LOCAL STORAGE
mSATA, NVMe
INTERCONNECTS
Mellanox InfiniBand FDR/EDR
Intel® Omni-Path
EXTOLL
OPERATING SYSTEM
CentOS, Red Hat, or SUSE
POWER AND COOLING
Aurora Total Liquid Cooling: direct hot water cooling of all components (processors, memory, PCB, power converters, switches)
Up to 9 kW per chassis, up to 140 kW per rack (peak)
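The power figures above tie back to the "more than 7 GFLOP/s per Watt" claim at the top of this datasheet; the sketch below uses the ~1 PFLOP/s rack peak quoted earlier and treats the rack-level 140 kW figure as the denominator (an assumption, since the efficiency claim is stated for processor boards).

# Sketch: rack power budget and efficiency from the figures above.
chassis_per_rack = 16
chassis_peak_kw  = 9
rack_peak_kw     = 140                    # datasheet rack-level peak
rack_peak_gflops = 1_000_000              # ~1 PFLOP/s per rack (peak)
print(chassis_per_rack * chassis_peak_kw)        # 144 kW if all chassis peaked at once
print(rack_peak_gflops / (rack_peak_kw * 1000))  # ~7.1 GFLOP/s per Watt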
SOFTWARE