
Docker

Preface

PoC Docker stacks for PiHole, Portainer, Traefik, etc.
These were run from an Ubuntu workstation, using Cloudflared and Cloudflare DNS.

Consider porting to Proxmox LXC containers and/or using Tailscale/Headscale to remove the need
to punch through the firewall router. Previously, this Docker deployment ran behind a firewall
that the author had no access to.

Updated

  • Now hosted on Proxmox Alpha KVM.
  • Use some of the earlier scripts as reference only.
  • Using the Docker repository on GitHub.
  • Not all these Docker containers are required as they now run on LXC.
  • Only for PoC if replicated in LXC.

Reference

https://medium.digitalmirror.uk/how-to-run-deepseek-uncensored-ai-models-locally-remotely-on-every-platform-such-as-docker-2ace545449dc

https://youtu.be/kgWEnryBXQg?si=cl3wrqn0pXeKuaDp
https://youtu.be/y5-6qww8uKk?si=9aPDuxgMi1YeDv57

https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Debian&target_version=13&target_type=deb_network
https://www.nvidia.com/en-us/drivers/details/264101/
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://pve.proxmox.com/wiki/QEMU/KVM_Virtual_Machines#resource_mapping
https://pve.proxmox.com/wiki/PCI_Passthrough#Introduction

https://www.virtualizationhowto.com/2025/05/run-ollama-with-nvidia-gpu-in-proxmox-vms-and-lxc-containers/

Host Prerequisites

  • Ensure the Alpha host has PCI GPU passthrough and IOMMU enabled in the BIOS
  • Ensure Secure Boot is disabled (for a good time)
  • Do not need to add drivers or CUDA to host! :-)
  • Added nameservers to resolv.conf
  • Updated network interfaces
  • Added git keys and agent
  • Updated .bashrc etc. to check agent and load keys
  • Set hostname
  • Added PCI Resource Mappings in Proxmox for the Cluster level. The docker VM can then use this mapping at HA level
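
The IOMMU state can be verified on the host before any passthrough work; a minimal check (sketch, covering both Intel and AMD message formats):

```shell
# Confirm IOMMU came up (run on the Proxmox host).
# Intel hosts log "DMAR: IOMMU enabled"; AMD hosts log "AMD-Vi" lines.
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi || echo "no IOMMU messages found"

# List IOMMU groups; a device being passed through should sit in its
# own group, or share one only with other functions of the same card.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}
    echo "group ${n%%/*}: ${d##*/}"
done | sort -V
```

On this host the GT 1030 and its HDMI audio function share IOMMU group 2, which is the expected layout for a single card.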

Resource

  • Blacklisted the GPU drivers on the host

/etc/modprobe.d/blacklist.conf

# modprobe's blacklist directive does not expand globs; name modules explicitly
blacklist nvidia
blacklist nouveau

Update the initramfs

update-initramfs -u -k all

  • Add kernel modules on host (older kernels)

          # /etc/modules is obsolete and has been replaced by /etc/modules-load.d/.
          # Please see modules-load.d(5) and modprobe.d(5) for details.
          #
          # Updating this file still works, but it is undocumented and unsupported.
    
          MODULES-LOAD.D(5)   modules-load.d    MODULES-LOAD.D(5)
    
          NAME
                 modules-load.d - Configure kernel modules to load at boot
    
          SYNOPSIS
                     /etc/modules-load.d/*.conf
                     /run/modules-load.d/*.conf
                     /usr/local/lib/modules-load.d/*.conf
                     /usr/lib/modules-load.d/*.conf
    

Modules seem to be loaded automatically in later kernels via a newer module-metadata mechanism:

  /lib/modules/6.17.13-1-pve/modules.order
  /lib/modules/6.17.13-1-pve/modules.softdep
  /lib/modules/6.17.13-1-pve/modules.dep

There is no need to add these entries to /etc/modules on later kernels:

vfio
vfio_iommu_type1
vfio_pci

Normally an initramfs update would follow if the modules had to be added manually:

update-initramfs -u -k all

Verification: lsmod | grep vfio returns

  vfio_pci               20480  2
  vfio_pci_core          90112  1 vfio_pci
  vfio_iommu_type1       49152  1
  vfio                   65536  9 vfio_pci_core,vfio_iommu_type1,vfio_pci
  iommufd               126976  1 vfio
  irqbypass              16384  2 vfio_pci_core,kvm  


pvesh get /nodes/alpha/hardware/pci --pci-class-blacklist ""  

┌──────────┬────────┬──────────────┬────────────┬────────┬─────────────────────────────────────────────────────────┬──────┬──────────────────┬──────────────────────────────────┬──────────────────┬───────────────────────
│ class    │ device │ id           │ iommugroup │ vendor │ device_name                                             │ mdev │ subsystem_device │ subsystem_device_name            │ subsystem_vendor │ subsystem_vendor_name 
╞══════════╪════════╪══════════════╪════════════╪════════╪═════════════════════════════════════════════════════════╪══════╪══════════════════╪══════════════════════════════════╪══════════════════╪═══════════════════════
│ 0x010601 │ 0x43eb │ 0000:02:00.1 │          0 │ 0x1022 │ 500 Series Chipset SATA Controller                      │      │ 0x1062           │ ASM1062 Serial ATA Controller    │ 0x1b21           │ ASMedia Technology Inc
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x010802 │ 0xa80a │ 0000:01:00.0 │          0 │ 0x144d │ NVMe SSD Controller PM9A1/PM9A3/980PRO                  │      │ 0xa801           │ SSD 980 PRO                      │ 0x144d           │ Samsung Electronics Co
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x010802 │ 0xa80a │ 0000:04:00.0 │          0 │ 0x144d │ NVMe SSD Controller PM9A1/PM9A3/980PRO                  │      │ 0xa801           │ SSD 980 PRO                      │ 0x144d           │ Samsung Electronics Co
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x020000 │ 0x8125 │ 0000:07:00.0 │          0 │ 0x10ec │ RTL8125 2.5GbE Controller                               │      │ 0x0123           │                                  │ 0x10ec           │ Realtek Semiconductor 
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x020000 │ 0x8125 │ 0000:08:00.0 │          0 │ 0x10ec │ RTL8125 2.5GbE Controller                               │      │ 0x0123           │                                  │ 0x10ec           │ Realtek Semiconductor 
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x020000 │ 0x8125 │ 0000:0b:00.0 │          0 │ 0x10ec │ RTL8125 2.5GbE Controller                               │      │ 0x8125           │                                  │ 0x1849           │ ASRock Incorporation  
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x030000 │ 0x1d01 │ 0000:0c:00.0 │          2 │ 0x10de │ GP108 [GeForce GT 1030]                                 │      │ 0x85f4           │                                  │ 0x1043           │ ASUSTeK Computer Inc. 
├──────────┼────────┼──────────────┼────────────┼────────┼─────────────────────────────────────────────────────────┼──────┼──────────────────┼──────────────────────────────────┼──────────────────┼───────────────────────
│ 0x040300 │ 0x0fb8 │ 0000:0c:00.1 │          2 │ 0x10de │ GP108 High Definition Audio Controller                  │      │ 0x85f4           │                                  │ 0x1043           │ ASUSTeK Computer Inc. 
  • Updated Grub on host with iommu setting
  • Updated ssh users
  • Added david to sudo group
  • Install git curl and vim
  • Add git and windows ssh keys
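
The Grub change amounts to one line in /etc/default/grub, followed by update-grub and a reboot (a sketch for an AMD host like this one; Intel hosts use intel_iommu=on instead):

```shell
# /etc/default/grub on the host. iommu=pt restricts the IOMMU to
# passthrough devices; amd_iommu is already on by default on recent
# kernels, so stating it is belt-and-braces.
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```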

VM Prerequisites

  • When creating the docker VM, set BIOS to SeaBIOS with the Q35 machine type if you are using older GPUs like the GT 1030 (Pascal architecture)

  • When the Docker VM has been created, install these prerequisites:

    apt install git
    apt install vim
    apt install curl
    apt update && apt upgrade
    
    apt install build-essential -y
    apt-get install software-properties-common # Debian has flagged this as a bug; the package is not included in the latest Trixie distro
    apt-get install linux-headers-$(uname -r) -y
    apt install ca-certificates curl  
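
With the cluster-level Resource Mapping in place, the GPU can be attached to the Docker VM from the host CLI; a sketch, where the VM id 100 and the mapping name gpu are placeholders for your own values:

```shell
# On the Proxmox host: set the machine type the GT 1030 needs,
# then attach the mapped GPU. "100" and "gpu" are placeholders.
qm set 100 --machine q35 --bios seabios
qm set 100 --hostpci0 mapping=gpu,pcie=1
```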
    

Installation on Guest VM

Order is important. Docker first then drivers.

Resource

1. Docker

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc

tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

apt update
apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl status docker
systemctl enable docker
docker run hello-world
groupadd docker
usermod -aG docker david
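
The usermod change only takes effect for new login sessions; to pick it up immediately and confirm rootless access (sketch):

```shell
# Group membership is read at login; newgrp opens a subshell with
# the docker group already active, avoiding a logout/login cycle.
newgrp docker
docker run --rm hello-world   # should now work without sudo
```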

2. GPU drivers & CUDA

a) Install nVidia CUDA toolkit (optional)

 wget https://developer.download.nvidia.com/compute/cuda/repos/debian13/x86_64/cuda-keyring_1.1-1_all.deb
 dpkg -i cuda-keyring_1.1-1_all.deb
 apt-get update
 apt-get -y install cuda-toolkit-13-1

  • Ensure /tmp has about 5 GB free
  • Ensure Secure Boot is also disabled in the VM
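
If the toolkit was installed, it lands under /usr/local/cuda and is not added to PATH automatically; a quick sanity check (sketch):

```shell
# The CUDA toolkit installs under /usr/local/cuda and does not
# modify PATH; add it for the current session and check nvcc.
export PATH=/usr/local/cuda/bin:$PATH
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not on PATH"
```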

(Screenshots EFI, EFI1 and Secure: VM firmware settings with Secure Boot disabled)

b) Install latest nVidia Drivers (not always best for legacy cards)

 #apt-get install -y cuda-drivers

c) Container toolkit prereqs

apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg2
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt-get update

d) Install nVidia Container toolkit

export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.18.2-1
apt-get install -y nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
nvidia-smi
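
Between installing the toolkit and running GPU containers, Docker has to be pointed at the NVIDIA runtime; the toolkit ships a helper for this (step taken from the NVIDIA container toolkit install guide referenced above):

```shell
# Register the NVIDIA runtime in /etc/docker/daemon.json and
# restart the daemon so containers can request --gpus.
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
```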

e) Install the correct v580 driver for the GeForce GT 1030

chown root:root NVIDIA-Linux-x86_64-580.126.18.run
chmod +x NVIDIA-Linux-x86_64-580.126.18.run
./NVIDIA-Linux-x86_64-580.126.18.run

# Alternatively
apt install -y nvidia-driver-580  

f) Test

nvidia-smi
systemctl restart docker
docker run --gpus all nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

12.1.1-runtime-ubuntu22.04: Pulling from nvidia/cuda  

2a5ee6fadd42: Pull complete 
aece8493d397: Pull complete 
dd4939a04761: Pull complete 
b0d7cc89b769: Pull complete 
1532d9024b9c: Pull complete 
04fc8a31fa53: Pull complete 
a14a8a8a6ebc: Pull complete 
7d61afc7a3ac: Pull complete 
8bd2762ffdd9: Pull complete 
Digest: sha256:8bbc6e304b193e84327fa30d93eea70ec0213b808239a46602a919a479a73b12
Status: Downloaded newer image for nvidia/cuda:12.1.1-runtime-ubuntu22.04

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

Tue Mar 10 08:40:45 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.126.18             Driver Version: 580.126.18     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GT 1030         Off |   00000000:01:00.0 Off |                  N/A |
| N/A   32C    P0            N/A  /   30W |       0MiB /   2048MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
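
With the toolkit verified, a GPU-backed Ollama container makes a useful end-to-end check, in the spirit of the virtualizationhowto reference above (sketch; the image, volume and port are the upstream defaults, and llama3.2 is just an example model):

```shell
# Start Ollama with all GPUs exposed; model data persists in the
# "ollama" named volume and the API listens on port 11434.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a small example model inside the container.
docker exec -it ollama ollama run llama3.2
```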