Compute resources at ITA

At ITA we have several small compute clusters. Two are generally available to all users (hyades and beehive), and some are dedicated to specific projects. Large parallel compute jobs should be run on national and international compute resources (e.g. NOTUR, Pleiades, PRACE). The local compute clusters are intended for smaller parallel compute jobs, serial jobs, code testing and data analysis. Some of the nodes have GPUs, and some are "fat nodes", i.e. nodes with large amounts of memory.

The standard operating system is Red Hat Enterprise Linux 9 (RHEL9), but the oldest clusters still run older releases (RHEL8, and RHEL7 on hercules).
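
For a quick check of whether a job fits on a local node, a small Python sketch like the one below (our illustration, not an official ITA tool) reports the core count and total memory of the node you are logged into:

    import os

    # Minimal sketch: report the CPU cores and total memory of the current node,
    # e.g. to decide whether a job needs a fat node or a regular one.
    cores = len(os.sched_getaffinity(0))          # cores available to this process
    with open("/proc/meminfo") as f:
        mem_kib = int(f.readline().split()[1])    # first line is MemTotal (in kB)
    print(f"{cores} cores, {mem_kib / 2**20:.0f} GiB RAM on this node")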

Beehive

Beehive is the Institute compute cluster. It can be used for all sorts of computing jobs, both serial and parallel. The operating system is RHEL8.

Node | CPU type | Mem | Infiniband | Note
beehive2 - beehive23 | 2x Intel E5-2680v2, 2.8 GHz, 10-core | 128 GB | Mellanox 56 Gb FDR | RHEL8
beehive24 - beehive28 | 2x Intel E5-2670, 2.6 GHz, 8-core | 128 GB | Mellanox 56 Gb FDR | RHEL8

Load and status page: beehive.uio.no

Hyades

Hyades is the new Institute compute cluster. It can be used for all sorts of computing jobs, both serial and parallel. The operating system is RHEL9.

Node | CPU type | Mem | Infiniband | Note
hyades - hyades2 | 2x AMD EPYC 7742, 2.25 GHz, 64-core/128 threads, 256 MB cache each | 1 TB | Mellanox 200 Gb HDR |
hyades3 - hyades4 | 2x AMD EPYC 7742, 2.25 GHz, 64-core/128 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR |
hyades5 | 2x AMD EPYC3 7763, 2.45 GHz, 64-core/128 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR |
hyades6 | 2x AMD EPYC3 74F3, 3.2 GHz, 24-core/48 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR |
hyades7 - hyades16 | 2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR |
hyades17 - hyades19 | 2x Intel E5-2695 v4, 2.1 GHz, 18-core | 768 GB | Mellanox 200 Gb HDR | Previously beehive43-45
hyades20 | 2x Intel Platinum 8160, 2.1 GHz, 24-core | 384 GB | Mellanox 200 Gb HDR | Previously beehive46
hyades21 | 2x AMD EPYC 7601, 2.2 GHz, 32-core | 256 GB | Mellanox 200 Gb HDR | Previously beehive47

GPU resource, RHEL9:

Node | CPU type | Mem | GPU
viscacha | 2x Intel E5-2695v4, 2.1 GHz, 18-core | 768 GB | 2x NVIDIA P100 (3584 cores, 16 GB)
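
To confirm which GPUs a node such as viscacha exposes before starting GPU work, a sketch like the following can be used (it assumes the NVIDIA driver and the nvidia-smi tool are installed on the node):

    import subprocess

    # Minimal sketch: list GPU name and memory via nvidia-smi.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for gpu in out.stdout.strip().splitlines():
        print(gpu)    # e.g. "Tesla P100-PCIE-16GB, 16384 MiB"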

Load and status page: beehive.uio.no

Owl

The Owl cluster is for Ingunn and Hans Kristian's CMB group. The operating system is RHEL9.

Node | CPU type | Mem | Infiniband
owl18 - owl24 | 2x Intel E5-2697v2, 2.7 GHz, 12-core | 768 GB | RETIRED
owl25 - owl28 | 4x Intel E7-8870v3, 2.1 GHz, 18-core | 1.5 TB | none
owl29 - owl30 | 4x Intel E7-4850v4, 2.1 GHz, 16-core | 1.5 TB | Mellanox 200 Gb HDR
owl31 - owl35 | 2x AMD EPYC 7551, 2 GHz, 32-core | 256 GB | Mellanox 200 Gb HDR
owl36 - owl37 | 2x AMD EPYC 7H12, 2.6 GHz, 64-core | 2 TB | Mellanox 200 Gb HDR

Load and status page: owl.uio.no

Euclid

The euclid cluster is available for members of the Euclid project. The operating system is RHEL8 on the old nodes and RHEL9 on the rest.

Node | CPU type | Mem | Infiniband | Note
euclid2 - euclid16 | 2x Intel E5-2670, 2.6 GHz, 8-core | 128 GB | Mellanox 56 Gb FDR | RHEL8
euclid21 | 4x Intel E7-8870v3, 2.1 GHz, 18-core | 1.5 TB | none | RHEL9
euclid22 - euclid32 | 2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR | RHEL9

NOTE: You cannot run parallel jobs across the 56 Gb and 200 Gb Infiniband networks; they are not connected.
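
In practice this means an MPI hostfile must stay within one fabric. A minimal sketch of building such a hostfile (the node lists follow the table above; the hostfile format assumes Open MPI):

    # Minimal sketch: build an MPI hostfile from nodes on a single Infiniband
    # fabric only, since the 56 Gb FDR and 200 Gb HDR networks are not connected.
    FDR_NODES = [f"euclid{i}" for i in range(2, 17)]   # euclid2-euclid16, 56 Gb FDR
    HDR_NODES = [f"euclid{i}" for i in range(22, 33)]  # euclid22-euclid32, 200 Gb HDR

    def write_hostfile(nodes, slots, path="hostfile"):
        # Pick nodes from exactly one list; never mix the two fabrics.
        with open(path, "w") as f:
            for node in nodes:
                f.write(f"{node} slots={slots}\n")

    write_hostfile(HDR_NODES[:4], slots=32)  # then: mpirun --hostfile hostfile ./app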

Load and status page: euclid.uio.no

Hercules

The hercules cluster is used for the Hinode datacenter and SST (La Palma) data processing. The operating system is RHEL7.

Node | CPU type | Mem | Infiniband
hercules2 - hercules14 | 2x Intel E5-2670, 2.6 GHz, 8-core | 128 GB | Mellanox 56 Gb FDR
hercules15 | 4x Intel E5-4640, 2.4 GHz, 8-core | 256 GB | Mellanox 56 Gb FDR

Load and status page: hercules.uio.no

Pleiades

The pleiades cluster is for SST (La Palma) data processing. The operating system is RHEL9.

Node | CPU type | Mem | Infiniband
pleiades3 - pleiades12 | 2x Intel E5-2650L v4, 1.7 GHz, 14-core | 128 GB | none
pleiades13 - pleiades29 | 2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each | 256 GB | Mellanox 200 Gb HDR

Load and status page: pleiades.uio.no

Orion

The Orion nodes are available for Sven Wedemeyer's projects. The operating system is RHEL9.

Node | CPU type | Mem | Infiniband
orion | 2x AMD EPYC3 7543, 2.8 GHz, 32-core/64 threads, 256 MB cache each | 512 GB | Mellanox 200 Gb HDR

Load and status page: orion.uio.no

Eagle

The Eagle nodes are available for RoCS. Note that eagle4-6 have massive amounts of RAM (2 TB) and should be used for jobs that need it (e.g. debugging Bifrost with a 768^3 grid; see the estimate below). The eagle node itself is used for automatic testing from GitHub and should not be used for normal jobs. The operating system is RHEL9.
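
As a rough illustration of why the 2 TB nodes matter, the arithmetic below estimates the footprint of a 768^3 grid; the variable count and precision are illustrative assumptions, not Bifrost specifics:

    # Rough estimate of the memory footprint of a 768^3 grid.
    # 8 variables and 4-byte (single precision) elements are assumed values.
    n = 768 ** 3                            # ~4.5e8 grid cells
    gib = n * 8 * 4 / 2**30
    print(f"~{gib:.1f} GiB per snapshot")   # ~13.5 GiB; work arrays and MPI
                                            # buffers multiply this several times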

Node | CPU type | Mem | Infiniband / GPU
eagle4 - eagle6 | 2x AMD EPYC 7551, 2 GHz, 32-core | 2 TB | Mellanox 200 Gb HDR
eagle7 | 2x AMD EPYC2 7302, 3 GHz, 16-core | 512 GB | 6x NVIDIA Tesla T4 (each 2560 cores / 16 GB)
eagle8 | 2x AMD EPYC2 7742, 2.25 GHz, 64-core | 1 TB | Mellanox 200 Gb HDR; 2x NVIDIA Tesla V100S (each 5120 cores / 32 GB)
eagle9 | 2x AMD EPYC3 74F3, 3.2 GHz, 24-core/48 threads, 256 MB cache | 128 GB | 2x AMD Instinct MI100 (each 120 compute units / 32 GB)

Load and status page: eagle.uio.no

By Torben Leifsen
Published July 11, 2017 3:59 PM - Last modified Mar. 5, 2024 10:45 AM