Software compatibility by GPU architecture

GPULab contains a variety of GPUs, which are based on different NVIDIA architectures and have different compute capabilities (CC).

Some software packages must be compiled with support for a specific architecture in order to run on it. This page contains information on the compatibility of the different architectures present in GPULab with popular software packages.

Overview of GPU architectures

Below is a list of the different GPUs available in GPULab, grouped by architecture, with each GPU's compute capability version noted.

The compute capability version was retrieved from the NVIDIA CUDA GPUs and Legacy GPUs pages.

  • Pascal architecture:
    • Cluster 1,4: NVIDIA GTX 1080 Ti (Compute Capability 6.1)
  • Volta architecture:
    • Cluster 6,103: NVIDIA Tesla V100 (Compute Capability 7.0)
  • Turing architecture:
    • Cluster 3: NVIDIA RTX 2080 Ti (Compute Capability 7.5)
  • Ampere architecture:
    • Cluster 8: NVIDIA A40 (Compute Capability 8.6)
    • Cluster 10: NVIDIA RTX A6000 (Compute Capability 8.6)
    • Cluster 101: NVIDIA A100 (Compute Capability 8.0)
  • Ada Lovelace architecture:
    • Cluster 5: NVIDIA RTX 4090 (Compute Capability 8.9)
    • Cluster 102: NVIDIA RTX 4000 (Compute Capability 8.9)
  • Hopper architecture:
    • Cluster 11: NVIDIA H200 (Compute Capability 9.0)
  • Blackwell architecture:
    • Cluster 9: NVIDIA RTX 5090 (Compute Capability 12.0)
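For scripts that need to know a cluster's compute capability, the list above can be captured in a small lookup table (a sketch; the cluster IDs and CC values are taken from the list above, the function name is our own):

```python
# Compute capability (CC) per GPULab cluster, as listed above.
CLUSTER_CC = {
    1: (6, 1), 4: (6, 1),    # GTX 1080 Ti (Pascal)
    6: (7, 0), 103: (7, 0),  # Tesla V100 (Volta)
    3: (7, 5),               # RTX 2080 Ti (Turing)
    8: (8, 6),               # A40 (Ampere)
    10: (8, 6),              # RTX A6000 (Ampere)
    101: (8, 0),             # A100 (Ampere)
    5: (8, 9),               # RTX 4090 (Ada Lovelace)
    102: (8, 9),             # RTX 4000 (Ada Lovelace)
    11: (9, 0),              # H200 (Hopper)
    9: (12, 0),              # RTX 5090 (Blackwell)
}

def compute_capability(cluster: int) -> str:
    """Return the compute capability of a cluster as an 'X.Y' string."""
    major, minor = CLUSTER_CC[cluster]
    return f"{major}.{minor}"

print(compute_capability(9))  # 12.0
```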

Software compatibility

CUDA toolkit compatibility

With the release of CUDA 13, NVIDIA has deprecated support for the Pascal and Volta architectures.

Architecture support for Maxwell, Pascal, and Volta is considered feature-complete. Offline compilation and library support for these architectures have been removed in CUDA Toolkit 13.0 major version release. The use of CUDA Toolkits through the 12.x series to build applications for these architectures will continue to be supported, but newer toolkits will be unable to target these architectures.

Source: NVIDIA CUDA 13 Release Notes
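In practice, this boils down to a simple rule for the affected GPULab clusters: the 12.x toolkit series is the last that can compile device code for Pascal and Volta. A minimal sketch of that rule (it only looks at the toolkit's major version and covers only the architectures mentioned above):

```python
def can_build_for_pascal_volta(cuda_major: int) -> bool:
    """True if a CUDA toolkit of this major version can still compile
    device code for Maxwell/Pascal/Volta (CC below 7.5).

    Offline compilation for these architectures was removed in CUDA
    Toolkit 13.0, so the 12.x series is the last that targets them.
    """
    return cuda_major <= 12

print(can_build_for_pascal_volta(12))  # True: CUDA 12.x still targets the V100
print(can_build_for_pascal_volta(13))  # False: CUDA 13 dropped Pascal/Volta
```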

cuDNN compatibility

The official NVIDIA cuDNN Support Matrix indicates which architectures are supported by which cuDNN versions.

Most notably:

  • support for the GTX 1080 Ti (CC 6.1) was dropped in cuDNN 9.12.0. You need cuDNN 9.11.1 or earlier to use this GPU.
  • support for the RTX 5090 (CC 12.0) was added in cuDNN 9.7.0. You need cuDNN 9.7.0 or later to use this GPU.
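These two support windows can be checked programmatically before installing (a sketch; the version thresholds come from the bullets above, and the table only covers these two notable cases):

```python
# cuDNN support windows as (minimum version, maximum version);
# None means unbounded on that side.
CUDNN_WINDOW = {
    "GTX 1080 Ti": (None, (9, 11, 1)),  # dropped in cuDNN 9.12.0
    "RTX 5090": ((9, 7, 0), None),      # added in cuDNN 9.7.0
}

def cudnn_supports(gpu: str, version: str) -> bool:
    """True if the given cuDNN version (e.g. '9.11.1') supports the GPU."""
    v = tuple(int(part) for part in version.split("."))
    lo, hi = CUDNN_WINDOW[gpu]
    return (lo is None or v >= lo) and (hi is None or v <= hi)

print(cudnn_supports("GTX 1080 Ti", "9.12.0"))  # False
print(cudnn_supports("RTX 5090", "9.7.0"))      # True
```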

Compatibility with PyTorch

You can verify which Compute Capabilities are supported by your PyTorch installation by running the following code:

$ python -c "import torch; print(torch.cuda.get_arch_list())"
['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80']

An unofficial list of supported architectures for different PyTorch versions can be found here: https://github.com/moi90/pytorch_compute_capabilities/blob/main/table_pip.md.

Most notably:

  • support for the GTX 1080 Ti (CC 6.1) was dropped in PyTorch 2.8. You need PyTorch 2.7 or earlier to use this GPU.
  • support for the RTX 5090 (CC 12.0) was added in PyTorch 2.8. You need PyTorch 2.8 or later to use this GPU.
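The entries returned by get_arch_list() can be matched against a GPU's compute capability: sm_XY entries are native binaries for CC X.Y (and, per NVIDIA's binary-compatibility rules, cubins are forward compatible within the same major CC version), while compute_XY entries contain PTX that the driver can JIT-compile for newer GPUs. A sketch of such a check (the forward-compatibility handling is deliberately simplified):

```python
def cc_supported(arch_list: list, major: int, minor: int) -> bool:
    """Check whether a build's arch list covers compute capability major.minor.

    'sm_XY' entries are native binaries; a cubin runs on GPUs of the
    same major CC with an equal or higher minor (e.g. sm_60 on CC 6.1).
    'compute_XY' entries carry PTX, simplified here to: usable on any
    CC >= X.Y.
    """
    cc = major * 10 + minor
    for arch in arch_list:
        kind, _, num = arch.partition("_")
        n = int(num)
        if kind == "sm" and n // 10 == major and n % 10 <= minor:
            return True
        if kind == "compute" and n <= cc:
            return True
    return False

# With the example arch list shown above:
arches = ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80']
print(cc_supported(arches, 6, 1))   # True: the sm_60 binary runs on CC 6.1
print(cc_supported(arches, 12, 0))  # False: the RTX 5090 needs a newer build
```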

Compatibility with TensorFlow

You can verify which Compute Capabilities TensorFlow was compiled against by running the following code:

$ python -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cuda_compute_capabilities'])"
['sm_60', 'sm_70', 'sm_80', 'sm_89', 'compute_90']

Furthermore, the cuDNN version TensorFlow was compiled against can be checked with:

$ python -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cudnn_version'])"
9

Compatibility of JupyterHub images

We maintain a set of Jupyter Docker images with support for GPU computing, which can be used on our JupyterHub instance. These are built weekly, and always include the latest stable releases of popular deep learning frameworks such as PyTorch and TensorFlow.

Every image is tagged with the CUDA version it was built with and the versions of the deep learning frameworks it contains. The image selector on our JupyterHub spawn page uses the -latest tag, which always points to the most recently built image for that framework.

You can find a full list of available images and their tags on GPU Docker Stacks: Container Registry, including older images that are compatible with older GPU architectures. The Jupyter project also maintains a list of the available images on Jupyter Docker Stacks: Container Registry.

For example: the image gitlab.ilabt.imec.be:4567/ilabt/gpu-docker-stacks/pytorch-notebook:cuda12-pytorch-2.4.0 contains PyTorch 2.4.0 and was built with CUDA 12. It can thus be used on our older GTX 1080 Ti GPUs, but does not support the RTX 5090 with the newer Blackwell architecture.
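This kind of reasoning can be scripted when choosing an image tag. The sketch below is illustrative only: the tag format follows the example above, but the selection rules are simplified (it assumes the RTX 5090 needs a CUDA 13 image, and that CUDA 13 images dropped Pascal/Volta); always check the registry for the actual tags.

```python
def image_ok_for_gpu(image_tag: str, cc: float) -> bool:
    """Rough check whether a 'cudaNN-...' image tag can serve a GPU.

    Simplified, illustrative rules: CUDA 13 images no longer support
    CC < 7.5 (Pascal/Volta), and the Blackwell RTX 5090 (CC 12.0) is
    assumed to require a CUDA 13 image.
    """
    cuda_major = int(image_tag.split("-")[0].removeprefix("cuda"))
    if cc < 7.5:       # Pascal/Volta: last supported by the 12.x series
        return cuda_major <= 12
    if cc >= 12.0:     # Blackwell: assumed to need CUDA 13
        return cuda_major >= 13
    return True

print(image_ok_for_gpu("cuda12-pytorch-2.4.0", 6.1))   # True: GTX 1080 Ti
print(image_ok_for_gpu("cuda12-pytorch-2.4.0", 12.0))  # False: RTX 5090
```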