If you need to:
- Reduce the run time of simulations
- Reduce the time to solution for complex problems
- Parallelize many runs
- Run large problems over many cores
But you don’t have a supercomputing budget, the Aurora Cube can be the perfect entry point to the high-performance computing world, able to give your applications a substantial boost.
The Aurora Cube:
- Dramatically accelerates applications while maintaining compatibility with most existing software, with no need for porting
- Does not need air conditioning or controlled environments
- Avoids fans and noise thanks to hot liquid cooling
- Saves on energy bills thanks to record-setting energy efficiency
- Replaces your workstations, hosting up to 160 users in a single box
- Centralizes departmental IT resources
- Reduces space occupancy with one of the densest HPC systems on the market.
Powerful
The Aurora Cube performs at more than 8 Tflop/s per rack and features fast QDR InfiniBand interconnects.
Silent
The Aurora Cube is water cooled, so it produces very little noise.
Modular
From 1 to 16 dual-socket boards in a single unit; more units can be connected together.
Scalable and easy
The Cube avoids complicated, messy cabling and scales easily by joining additional modules.
Compact
Remarkable density: 32 powerful Xeon processors in a 65 cm rack (95 cm with the incorporated heat exchanger).
Energy efficient
The Aurora Cube marks a record in energy efficiency at over 3.2 GFlop/s per watt.
Reliable
With no moving parts, vibrations are eliminated. Direct water cooling prevents hot spots, while the soldered memory provides speed and robustness.
Water cooled
All computational components (CPUs, coprocessors, memory, boards) in the Aurora Cube are water cooled, with no need to deploy expensive infrastructure. All heat can be carried out of the room, just as with a split air conditioner.
Architecture
Each Cube mounts one Aurora HPC 10-22 chassis with 16 slots. Each chassis provides electrical, network (IB 40 Gbps), and liquid connections and holds up to 16 Aurora HPC 10-23 blades. Each Cube comes with a management node, optional InfiniBand storage, and optional Nvidia Grid technology. The Cube is sold in two configurations:
- Standalone, with embedded liquid cooling
- Split, with computational unit (the server) and external cooling unit
Computing power
- Up to 8.3 Tflop/s per cabinet
Processor
- Up to 16 Intel Xeon E5
Memory
- Up to 128 GB ECC DDR3 SDRAM per node at 1866 MT/s
Interconnects
- 40 Gbps QDR InfiniBand
- Optional: 1+1 3D torus or 3D mesh, bandwidth up to 240+240 Gbps, latency ~1 µs
- Gigabit Ethernet available on request
Interfaces
- 20 x 40 Gbps QDR InfiniBand
- 2 x 1 Gbps Ethernet
- 2 x USB
- 1 x standard VGA
Storage
- Local storage: up to 16 x 4 TB 2.5" SATA disks or up to 8 x 512 GB 1.8" microSATA SSDs
- InfiniBand fast storage: up to 75 TB (expandable)
Cooling
Aurora Direct Hot Liquid Cooling (embedded or external cooling configuration)
Embedded cooling configuration includes: cooling plates, low-noise pump, heat exchanger, pipes, distribution bar, low-noise fans
External cooling configuration includes:
- Computational unit: cooling plates, connection pipes, distribution bar
- External Unit: heat exchanger, pump
Power consumption
- 7 kW peak per fully loaded rack (16 blades); 3 x 230 V, 16 A
Dimensions
- Standalone configuration: H 90 cm x W 65 cm x D 80 cm (H 35.4" x W 25.6" x D 31.5")
- Split configuration, server: H 65 cm x W 65 cm x D 75 cm (H 25.6" x W 25.6" x D 29.5").
Weight
- 180 kg (396 lb), fully loaded
Software
Operating System
- Linux CentOS, Linux Red Hat, Windows HPC 2008
Cluster Manager
- Bright Cluster Manager, xCAT
Remote access
- NICE DCV, Nvidia Grid
Compilers, Libraries and Tools
- OpenMPI, CUDA, Allinea, Intel Cluster Studio
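As an illustration of the listed toolchain, a parallel application would typically be built and launched with the OpenMPI wrappers. This is a minimal sketch only; the source file, binary name, host file, and rank counts are assumptions, not part of the product documentation:

```shell
# Compile a (hypothetical) MPI source file with the Open MPI wrapper compiler
mpicc -O2 -o simulation simulation.c

# Launch 32 ranks across two 16-core nodes listed in a host file;
# Open MPI selects the InfiniBand transport automatically when available
mpirun -np 32 --hostfile nodes.txt ./simulation
```

Because the Cube runs standard Linux with a stock MPI stack, existing MPI applications can be launched this way without porting.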
Job Management
- PBS Professional, TORQUE/MAUI
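With one of the listed job managers, work on the Cube is submitted through batch scripts rather than run interactively. A minimal PBS Professional job script might look like the following sketch; the job name, resource counts, walltime, and binary name are illustrative assumptions:

```shell
#!/bin/bash
# Hypothetical PBS Professional job script for a 2-node MPI run
#PBS -N cube_job
#PBS -l select=2:ncpus=16:mpiprocs=16
#PBS -l walltime=01:00:00

# PBS starts the job in the home directory; move to the submission directory
cd "$PBS_O_WORKDIR"

# Launch one MPI rank per allocated core
mpirun -np 32 ./simulation
```

The script would be submitted with `qsub`, and the scheduler places the ranks on the allocated blades.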
Parallel File System