To list information about the GPU:
[kjetba@mimi ~]$ nvidia-smi
Thu Aug  3 11:40:29 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.30.02              Driver Version: 530.30.02    CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro RTX 6000                On  | 00000000:25:00.0 Off |                    0 |
| N/A   25C    P8              13W / 250W |      3MiB / 23040MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
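For scripting, nvidia-smi can also emit machine-readable CSV via its --query-gpu option. A minimal Python sketch is shown below; the chosen query fields and the sample output are illustrative, not specific to mimi:

```python
import csv
import io
import shutil
import subprocess

def parse_gpu_csv(text):
    """Parse 'nvidia-smi --query-gpu=... --format=csv' output into dicts."""
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    header = next(reader)
    return [dict(zip(header, row)) for row in reader]

def query_gpus():
    """Return one dict per GPU, or [] when nvidia-smi is not available."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.used,memory.total,utilization.gpu",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

# Example with output similar to the table above:
sample = (
    "name, memory.used [MiB], memory.total [MiB], utilization.gpu [%]\n"
    "Quadro RTX 6000, 3 MiB, 23040 MiB, 0 %\n"
)
gpus = parse_gpu_csv(sample)
print(gpus[0]["name"])  # Quadro RTX 6000
```

This is handy for checking whether a GPU is idle before submitting a long-running job.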
To see GPU usage:
[kjetba@mimi ~]$ nvtop
Device 0 [Quadro RTX 6000] PCIe GEN 1@16x RX: 0.000 KiB/s TX: 0.000 KiB/s
GPU 300MHz  MEM 405MHz  TEMP 24°C  FAN N/A%  POW 13 / 250 W
GPU[ 0%]  MEM[ 0.344Gi/22.500Gi]
[interactive GPU/memory utilization graph and process table omitted]
TensorFlow and PyTorch are available as modules on mimi.uio.no:
$ module load TensorFlow/2.7.1-foss-2021b-CUDA-11.4.1
IMPORTANT: TensorFlow will allocate all the GPU memory by default. You should avoid this by adding the following in your code (sharing is caring):
import tensorflow as tf
# Allocate GPU memory incrementally instead of grabbing it all up front:
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
To run Jupyter Lab/Notebook on mimi.uio.no using your local browser, have a look here and here.
To use the GPU for rendering graphics on mimi, please see this page.
If you want to use the GPU for 3D applications, see here.