GUI to run DeepLabCut-Live on a video feed, preview and record from one or multiple cameras, and optionally record external timestamps and processor outputs.
The GUI has been modernized: it is now built with PySide6 (Qt), replacing the earlier tkinter interface.
The new interface supports multi-camera preview with a tiled display and PyTorch models, and offers improved interactive workflows for experimental use.
Find the full documentation at the DeepLabCut docs website.
- Python 3.10, 3.11 or 3.12
- One inference backend (choose at least one):
- PyTorch (recommended for best performance & compatibility)
- TensorFlow (for backwards compatibility with existing models; Windows installs are no longer available for Python > 3.10)
- A supported camera backend (OpenCV webcams by default; additional backends supported)
While the previous deeplabcut-live-gui is available on PyPI, the new PySide6-based GUI and its features must be installed from source.
To get the latest version, please follow the instructions below.
```
git clone https://github.com/DeepLabCut/DeepLabCut-live-GUI.git
cd DeepLabCut-live-GUI
uv venv dlclivegui

# Linux/macOS:
source dlclivegui/bin/activate

# Windows (Command Prompt):
.\dlclivegui\Scripts\activate.bat

# Windows (PowerShell):
.\dlclivegui\Scripts\Activate.ps1
```

You may install PyTorch or TensorFlow extras (or both), but you must choose at least one to run inference.
- PyTorch (recommended):

  ```
  uv pip install -e .[pytorch]
  ```

- TensorFlow (backwards compatibility):

  ```
  uv pip install -e .[tf]
  ```

Alternatively, with conda:

```
conda create -n dlclivegui python=3.12
conda activate dlclivegui
```

- PyTorch (recommended):

  ```
  pip install -e .[pytorch]
  ```

- TensorFlow:

  ```
  pip install -e .[tf]
  ```

Tip
For GPU/CUDA support specifics and framework compatibility, please follow the official PyTorch/TensorFlow install guidance for your OS and drivers.
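As a quick post-install sanity check, a short script like the one below can report which backends are importable and whether PyTorch sees a CUDA device. This helper is illustrative and not part of dlclivegui:

```python
def check_backends():
    """Report which optional inference backends are importable.

    Illustrative helper; not part of the dlclivegui package.
    """
    report = {}
    try:
        import torch  # present only if installed with the [pytorch] extra
        report["pytorch"] = True
        report["cuda"] = torch.cuda.is_available()
    except ImportError:
        report["pytorch"] = False
    try:
        import tensorflow  # present only if installed with the [tf] extra
        report["tensorflow"] = True
    except ImportError:
        report["tensorflow"] = False
    return report

report = check_backends()
print(report)
```

If neither backend is importable, revisit the extras step above before launching the GUI.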
After installation, start the application with:
```
dlclivegui        # in conda/mamba
# OR:
uv run dlclivegui
```

Important

Activate your venv/conda environment before launching so the GUI can access installed dependencies.
The new GUI supports one or more cameras.
Typical workflow:
- Configure Cameras (choose backend and devices)
- Adjust camera settings (serial, exposure, ROI/cropping, etc.)
- Start Preview
- Adjust visualization settings (keypoint color map, bounding boxes, etc.)
- Start inference
- Choose a DeepLabCut Live model
- Choose which camera to run inference on (currently one at a time)
- Start recording
- Adjust recording settings (codec, output format, etc.)
- Record video and timestamps to organized session folders
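The "organized session folders" step can be pictured with a minimal sketch. The folder naming and file layout below are assumptions for illustration, not the GUI's actual scheme:

```python
import csv
import tempfile
import time
from datetime import datetime
from pathlib import Path

def make_session_dir(base, camera="cam0"):
    """Create a timestamped per-camera session folder.

    The naming scheme here is illustrative, not the GUI's layout.
    """
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    session = Path(base) / f"{stamp}_{camera}"
    session.mkdir(parents=True, exist_ok=True)
    return session

def write_timestamps(session, frame_times):
    """Write one CSV row per captured frame: index and capture time (s)."""
    ts_file = session / "timestamps.csv"
    with ts_file.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "time_s"])
        for i, t in enumerate(frame_times):
            writer.writerow([i, f"{t:.6f}"])
    return ts_file

session = make_session_dir(tempfile.mkdtemp())
# Pretend three frames arrived ~1/30 s apart.
t0 = time.monotonic()
ts_file = write_timestamps(session, [t0 + i / 30 for i in range(3)])
```

Keeping per-frame timestamps alongside the video makes it possible to align pose output with external events after the fact.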
Note
OpenCV-compatible cameras (USB webcams, OBS virtual camera) work out of the box. For additional camera ecosystems (Basler, GenTL, Aravis, etc.), see the relevant documentation.
- Pose inference runs on one selected camera at a time (even in multi-camera mode)
- Camera feature support and availability depend on backend capabilities and hardware
- OpenCV controls for resolution/FPS are best-effort and device-driver dependent
- DeepLabCut-Live models must be exported and compatible with the chosen backend
- Performance depends on resolution, frame rate, GPU availability, and codec choice
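The frame-rate point can be made concrete with simple arithmetic: at a given capture rate, capture, inference, and display together must fit within the per-frame time budget. A tiny illustration:

```python
def frame_budget_ms(fps):
    """Milliseconds available per frame at a given capture rate."""
    return 1000.0 / fps

# At 30 FPS the whole pipeline has ~33 ms per frame; at 100 FPS the
# budget shrinks to 10 ms, so inference must be correspondingly faster.
budget_30 = frame_budget_ms(30)
budget_100 = frame_budget_ms(100)
```

If inference alone exceeds the budget, reducing resolution or frame rate (or using a GPU) is usually the first lever to pull.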
If you use this code, please cite:
- Kane et al., eLife 2020
- If preferred, see the Preprint
This project is under active development — feedback from real experimental use is highly valued.
Please report issues, suggest features, or contribute here on GitHub.
