Compute resources at ITA

At ITA we have several small compute clusters. One is generally available to all users, and some are dedicated to specific projects. Large parallel compute jobs should be run at national and international compute facilities (e.g. NOTUR, Pleiades, PRACE). The local compute clusters are intended for smaller parallel jobs, serial jobs, code testing and data analysis. Some nodes have GPUs, and some are "fat nodes", i.e. nodes with large amounts of memory.
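For code testing, a useful first step is a minimal parallel job that simply verifies MPI runs across the nodes. The sketch below is one way to do this in Python; it assumes the mpi4py package and an MPI runtime are installed on the cluster, and hello_mpi.py is a placeholder file name.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD              # communicator covering every rank in the job
    rank = comm.Get_rank()             # this process's rank number
    size = comm.Get_size()             # total number of ranks
    node = MPI.Get_processor_name()    # hostname of the node running this rank

    print(f"Hello from rank {rank} of {size} on {node}")

Launched with, for example, mpirun -np 4 python hello_mpi.py, each rank should report which node it landed on.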

Beehive

Beehive is the Institute's general compute cluster. It can be used for all sorts of computing jobs, both serial and parallel.

Node                   CPU type                                   Mem     Infiniband
beehive - beehive23    2x Intel E5-2680 v2, 2.8 GHz, 10-core      128 GB  Mellanox 56 Gb FDR
beehive26 - beehive31  2x Intel E5-2670, 2.6 GHz, 8-core          128 GB  Mellanox 56 Gb FDR
beehive34 - beehive36  2x Intel E5-2690, 2.9 GHz, 8-core          128 GB  Mellanox 56 Gb FDR
beehive37 - beehive42  2x Intel E5-2667 v2, 3.3 GHz, 8-core       128 GB  Mellanox 56 Gb FDR
beehive43 - beehive45  2x Intel E5-2695 v4, 2.1 GHz, 18-core      768 GB  Mellanox 56 Gb FDR
beehive46              2x Intel Platinum 8160, 2.1 GHz, 24-core   384 GB  Mellanox 56 Gb FDR
beehive47              2x AMD EPYC 7601, 2.2 GHz, 32-core         256 GB  Mellanox 56 Gb FDR

GPU resource:

Node      CPU type                               Mem     GPU
viscacha  2x Intel E5-2695 v4, 2.1 GHz, 18-core  768 GB  2x NVIDIA P100 (3584 cores, 16 GB each)
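Before starting a GPU job on viscacha it is worth checking that both cards are visible. A minimal sketch, assuming PyTorch is installed (nvidia-smi gives the same information from the shell):

    import torch

    # List the CUDA devices visible to this process; on viscacha this
    # should report two Tesla P100 cards with roughly 16 GB each.
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")
    else:
        print("No CUDA-capable GPU visible")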

Load and status page: beehive.uio.no

Hyades

Hyades is the Institute's newest compute cluster. It can be used for all sorts of computing jobs, both serial and parallel.

Node                CPU type                                                            Mem     Infiniband
hyades - hyades2    2x AMD EPYC 7742, 2.25 GHz, 64-core/128 threads, 256 MB cache each  1 TB    Mellanox 200 Gb HDR
hyades3 - hyades4   2x AMD EPYC 7742, 2.25 GHz, 64-core/128 threads, 256 MB cache each  512 GB  Mellanox 200 Gb HDR
hyades5             2x AMD EPYC3 7763, 2.45 GHz, 64-core/128 threads, 256 MB cache each 512 GB  Mellanox 200 Gb HDR
hyades6             2x AMD EPYC3 74F3, 3.2 GHz, 24-core/48 threads, 256 MB cache each   512 GB  Mellanox 200 Gb HDR
hyades7 - hyades16  2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each   512 GB  Mellanox 200 Gb HDR

Load and status page: hyades.uio.no

Owl

The Owl cluster is for Ingunn and Hans Kristian's CMB group.

Node           CPU type                               Mem     Infiniband
owl18 - owl24  2x Intel E5-2697 v2, 2.7 GHz, 12-core  768 GB  none
owl25 - owl28  4x Intel E7-8870 v3, 2.1 GHz, 18-core  1.5 TB  Mellanox 200 Gb HDR
owl29 - owl30  4x Intel E7-4850 v4, 2.1 GHz, 16-core  1.5 TB  Mellanox 200 Gb HDR
owl31 - owl35  2x AMD EPYC 7551, 2 GHz, 32-core       256 GB  Mellanox 200 Gb HDR
owl36 - owl37  2x AMD EPYC 7H12, 2.6 GHz, 64-core     2 TB    Mellanox 200 Gb HDR

Load and status page: owl.uio.no

Euclid

The euclid cluster is available for members of the Euclid project.

Node                 CPU type                                                           Mem     Infiniband
euclid - euclid16    2x Intel E5-2670, 2.6 GHz, 8-core                                  128 GB  Mellanox 56 Gb FDR
euclid21             4x Intel E7-8870 v3, 2.1 GHz, 18-core                              1.5 TB  Mellanox 56 Gb FDR
euclid22 - euclid32  2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each  512 GB  Mellanox 200 Gb HDR

NOTE: You cannot run parallel jobs across the 56 Gb and 200 Gb Infiniband networks; they are not connected.
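Before launching a multi-node job it can therefore be worth checking which fabric the nodes are on. A minimal sketch, assuming the Linux nodes expose the usual InfiniBand sysfs entries (the ibstat command shows the same thing):

    import glob

    # Print the link rate of every InfiniBand port on this node.
    # FDR nodes report e.g. "56 Gb/sec (4X FDR)", HDR nodes "200 Gb/sec (4X HDR)".
    for path in glob.glob("/sys/class/infiniband/*/ports/*/rate"):
        with open(path) as f:
            print(path, "->", f.read().strip())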

Load and status page: euclid.uio.no

Hercules

The hercules cluster is used for the Hinode datacenter and SST (La Palma) data processing.

Node                   CPU type                           Mem     Infiniband
hercules - hercules14  2x Intel E5-2670, 2.6 GHz, 8-core  128 GB  Mellanox 56 Gb FDR
hercules15             4x Intel E5-4640, 2.4 GHz, 8-core  256 GB  Mellanox 56 Gb FDR

Load and status page: hercules.uio.no

Pleiades

The pleiades cluster is for SST (La Palma) data processing.

Node                     CPU type                                                           Mem     Infiniband
pleiades, pleiades2      2x Intel E5-2695 v2, 2.4 GHz, 12-core                              128 GB  Mellanox 56 Gb FDR
pleiades3 - pleiades12   2x Intel E5-2650L v4, 1.7 GHz, 14-core                             128 GB  Mellanox 56 Gb FDR
pleiades13 - pleiades29  2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each  256 GB  None (waiting for installation, expected October)

Load and status page: pleiades.uio.no

Orion

The Orion nodes are available for Sven Wedemeyer's projects.

Node   CPU type                                                           Mem     Infiniband
orion  2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each  512 GB  None (waiting for installation, expected October)

Load and status page: orion.uio.no

Eagle

The Eagle nodes are available for RoCS. Note that eagle4-6 have massive amounts of RAM (2 TB) and should be reserved for jobs that actually need it (e.g. debugging Bifrost with a 768^3 grid; see the memory estimate below the table). The eagle node itself is used for automatic testing from GitHub and should not be used for normal jobs.

Node             CPU type                                                          Mem     GPU
eagle            2x Intel E5-2697 v2, 2.7 GHz, 12-core                             768 GB
eagle4 - eagle6  2x AMD EPYC 7551, 2 GHz, 32-core                                  2 TB
eagle7           2x AMD EPYC2 7302, 3 GHz, 16-core                                 512 GB  6x NVIDIA Tesla T4 (2560 cores, 16 GB each)
eagle8           2x AMD EPYC2 7742, 2.25 GHz, 64-core                              1 TB    2x NVIDIA Tesla V100S (5120 cores, 32 GB each)
eagle9           2x AMD EPYC3 74F3, 3.2 GHz, 24-core/48 threads, 256 MB cache each 128 GB  2x AMD Instinct MI100 (120 compute units, 32 GB each)
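As a rough guide to when the 2 TB nodes are needed, the arithmetic for a 768^3 grid looks like this; the array count is a placeholder, since how many 3-D arrays a run keeps in memory depends entirely on the setup:

    # Back-of-the-envelope memory footprint of a 768^3 simulation grid.
    nx = ny = nz = 768
    bytes_per_value = 4      # single precision; use 8 for double
    n_arrays = 50            # placeholder: primary variables plus work arrays

    per_array = nx * ny * nz * bytes_per_value / 1024**3   # GiB per 3-D array
    print(f"{per_array:.2f} GiB per array, ~{n_arrays * per_array:.0f} GiB for {n_arrays} arrays")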

Load and status page: eagle.uio.no

By Torben Leifsen
Published July 11, 2017 3:59 PM - Last modified Oct. 24, 2022 2:26 PM