Docker Hardware Acceleration - NVIDIA NVENC/NVDEC
Overview
Unmanic itself does not require hardware acceleration, but some plugins can take advantage of NVIDIA NVDEC/NVENC for faster video decoding and encoding.
For example, the Transcode Videos plugin can use NVENC/NVDEC when an NVIDIA GPU is present and correctly configured.
Install the NVIDIA driver first by following the Linux hardware acceleration guide.
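Before continuing, you can confirm the driver is working on the host itself by running nvidia-smi directly (outside of any container):

# Run this on the host; if it fails, revisit the driver installation
# from the Linux hardware acceleration guide before going further.
nvidia-smi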
1) Install the NVIDIA Container Toolkit
If you intend to use Unmanic inside a Docker container, you will need to pass through the required NVIDIA devices to the container.
The following instructions assume that Docker is already installed. If it is not, install Docker first and then return here.
Install the NVIDIA Container Toolkit, then configure Docker to use the NVIDIA runtime and restart the Docker daemon.
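As a rough sketch on a Debian/Ubuntu host (this assumes NVIDIA's apt repository has already been added per their Container Toolkit install guide; package commands differ on other distros):

# Install the toolkit (assumes NVIDIA's apt repository is configured).
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker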
Once you have followed these steps, you can test that the Unmanic Docker container can use the NVENC hardware by running:
docker run --rm --gpus all --entrypoint="" josh5/unmanic nvidia-smi
You should see output similar to the following (driver versions, GPU model and dates will differ):
Sun Apr 17 05:31:44 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54       Driver Version: 510.54       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   34C    P8    N/A / 120W |    185MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
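You can also check that the container's FFmpeg exposes the NVENC encoders. This assumes ffmpeg is on the image's PATH (Unmanic bundles FFmpeg for transcoding); if NVENC is available you should see entries such as h264_nvenc and hevc_nvenc:

docker run --rm --gpus all --entrypoint="" josh5/unmanic ffmpeg -hide_banner -encoders | grep -i nvenc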
2) Create the Docker container
We can now pass --runtime=nvidia (or set runtime: nvidia in Docker Compose) when creating a container to grant it access to the NVIDIA GPU. We can also set the NVIDIA_VISIBLE_DEVICES environment variable to the ID of a specific GPU, or to 'all' to give the container access to every NVIDIA GPU on the host.
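For example, to expose only a single GPU, set the variable to that GPU's index or UUID (both are listed by nvidia-smi -L on the host):

# List the available GPUs with their indices and UUIDs.
nvidia-smi -L

# Expose only the first GPU to the container; a full GPU UUID also works.
NVIDIA_VISIBLE_DEVICES=0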
- Docker run
- Docker-compose
PUID=$(id -u)
PGID=$(id -g)
# CONFIG_DIR - Where your settings are saved
CONFIG_DIR=/config
# LIBRARY_DIR - The location/locations of your library
LIBRARY_DIR=/library
# CACHE_DIR - A tmpfs mount or a folder for temporary conversion files
CACHE_DIR=/tmp/unmanic
# NVIDIA_VISIBLE_DEVICES - The GPUs that will be accessible to the container
NVIDIA_VISIBLE_DEVICES=all
docker run -ti --rm \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \
  -e PUID=${PUID} \
  -e PGID=${PGID} \
  -p 8888:8888 \
  -v ${CONFIG_DIR}:/config \
  -v ${LIBRARY_DIR}:/library \
  -v ${CACHE_DIR}:/tmp/unmanic \
  josh5/unmanic:latest
# Variables that will need to be changed:
# <PUID> - User id for folder/file permissions
# <PGID> - Group id for folder/file permissions
# <NVIDIA_VISIBLE_DEVICES> - The GPUs that will be accessible to the container
# Options: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#gpu-enumeration
# <PATH_TO_CONFIG> - Path where Unmanic will store config files
# <PATH_TO_LIBRARY> - Path where you store the files that Unmanic will scan
# <PATH_TO_ENCODE_CACHE> - Cache path for in-progress encoding tasks
---
version: '2.4'
services:
  unmanic:
    runtime: nvidia  # For H/W transcoding using the NVENC encoder
    container_name: unmanic
    image: josh5/unmanic:latest
    ports:
      - 8888:8888
    environment:
      - PUID=<PUID>
      - PGID=<PGID>
      - NVIDIA_VISIBLE_DEVICES=<NVIDIA_VISIBLE_DEVICES>
    volumes:
      - <PATH_TO_CONFIG>:/config
      - <PATH_TO_LIBRARY>:/library
      - <PATH_TO_ENCODE_CACHE>:/tmp/unmanic
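With the compose file saved (for example as docker-compose.yml in the current directory), start the container with:

docker compose up -d
# Or, with the legacy standalone binary:
docker-compose up -d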