GPULab
GPULab is a distributed system for running jobs in GPU-enabled Docker containers. GPULab consists of a set of heterogeneous clusters, each with its own characteristics (GPU model, CPU speed, memory, bus speed, …), allowing you to select the most appropriate hardware. Each job runs isolated within a Docker container with dedicated CPUs, GPUs and memory for maximum performance.
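GPULab itself takes care of scheduling and starting the container on the selected cluster, but the generic snippet below (not GPULab-specific; the image and command are placeholders) illustrates what a "GPU-enabled Docker container" looks like on a standalone machine with the NVIDIA Container Toolkit installed:

```bash
# Generic illustration only: start a container with access to all host GPUs
# and verify that the GPUs are visible inside it.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```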
This documentation describes what GPULab is and how to use it.
Tip
Looking for a quick introduction? Have a look at our 'GPULab and JupyterHub introduction' slide deck.
For bug reports, questions, and feedback:
- E-mail us at gpulab@ilabt.imec.be
- For iLab.t employees only:
  - Chat with us on the GPULab Mattermost Channel
  - See known bugs and feature requests at the iLab.t GitLab GPULab issues page
Note
The UGent HPC also offers GPU resources: it has multiple GPU clusters available with different generations of NVIDIA GPUs.
It is straightforward to port GPULab jobs to run on the UGent HPC instead: convert the Docker image that you are using to an Apptainer image (see the example below).
The maximum duration of a job on the HPC is 72 hours.
For more information on how to use these resources, please refer to the HPC documentation on GPU clusters.
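A minimal sketch of such a conversion, assuming Apptainer is available on the HPC node and that your image (the nvidia/cuda tag here is only a placeholder) is available on a public registry:

```bash
# Pull the Docker image from a registry and convert it to a SIF (Apptainer) image.
# The image name is a placeholder; substitute the image your GPULab job uses.
apptainer build myjob.sif docker://nvidia/cuda:12.2.0-base-ubuntu22.04

# Run a command inside the converted image; --nv makes the host NVIDIA GPUs
# and driver libraries available inside the container.
apptainer exec --nv myjob.sif nvidia-smi
```

If the image only exists in a local Docker daemon rather than on a registry, Apptainer can also build from a `docker-daemon://<image>:<tag>` source instead.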
Table of Contents