The Vi SDK runs on Python 3.10 through 3.14 on Linux, macOS, and Windows. This page covers installation options, virtual environment setup, GPU configuration, and troubleshooting common issues.
Requirements

| Requirement | Details |
| --- | --- |
| Python | 3.10, 3.11, 3.12, 3.13, or 3.14 |
| pip | Included with Python |
| OS | Linux, macOS, or Windows |
| Internet | Required for API access and package installation |
Check your Python version before installing:
Terminal
python --version
# or
python3 --version
The output should show Python 3.10 or higher.
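The same check can be done from inside Python, which is handy in scripts or notebooks (a small sketch using only the standard library):

```python
import sys

# Vi SDK supports Python 3.10 through 3.14
supported = (3, 10) <= sys.version_info[:2] <= (3, 14)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{'supported' if supported else 'unsupported'}")
```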
Basic installation
Terminal
pip install vi-sdk
The base install includes API functionality for:
Dataset management
Asset upload and download
Annotation workflows
Model operations
Optional feature sets
Install additional features as your workflow requires them.
Inference support
For running inference locally with vision-language models (Qwen2.5-VL, InternVL 3.5, Cosmos Reason1, NVILA):
Terminal
pip install vi-sdk[inference]
Jupyter notebook support
For proper rich formatting and progress tracking in Jupyter notebooks:
Terminal
pip install vi-sdk[jupyter]
Deployment support
For deploying models with NVIDIA NIM containers:
Terminal
pip install vi-sdk[deployment]
All features
Terminal
pip install vi-sdk[all]
If your shell treats square brackets specially (zsh does), quote the package spec: pip install "vi-sdk[all]".
Virtual environment setup
A virtual environment keeps Vi SDK's dependencies separate from other Python projects. This prevents version conflicts and makes your environment reproducible.
venv is included with Python and needs no additional install.
Which tool should I use?
venv: Built in, no setup needed. Good default.
conda: Popular in data science. Manages Python versions directly.
uv: Fastest install speeds. Good for large dependency trees like inference packages.
virtualenvwrapper: Adds convenient shortcuts for managing multiple environments.
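For example, with the built-in venv (the environment name vi-env is arbitrary):

```shell
# Create a virtual environment in the vi-env directory
python3 -m venv vi-env

# Activate it before installing:
#   Linux/macOS:  source vi-env/bin/activate
#   Windows:      vi-env\Scripts\activate
#
# While active, `pip install vi-sdk` installs into vi-env only,
# and `deactivate` returns you to the system Python.
```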
GPU support for inference
A GPU is not required for the base SDK or dataset operations, but it makes a significant difference for local inference. CPU inference works for development and small-scale testing.
NVIDIA GPUs (CUDA)
Terminal
# Install PyTorch with CUDA 11.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
# Then install Vi SDK with inference
pip install vi-sdk[inference]
Upgrading
Terminal
# Upgrade core SDK
pip install --upgrade vi-sdk
# Upgrade with all features
pip install --upgrade vi-sdk[all]
Check your current version:
import vi
print(vi.__version__)
Uninstalling
Terminal
pip uninstall vi-sdk
To remove everything including the virtual environment:
Terminal
# Deactivate first
deactivate
# Remove the environment directory
rm -rf vi-env # Linux/macOS
rmdir /s /q vi-env # Windows
Troubleshooting
Symptom: Import fails after installation.
Likely cause: Vi SDK was installed in a different Python environment from the one you're running.
Fix:
Terminal
# Check which environment has vi-sdk
pip show vi-sdk
# Install in the current Python interpreter
python -m pip install vi-sdk
# Confirm which Python you're using
which python # Linux/macOS
where python # Windows
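From inside Python, sys.executable reports exactly which interpreter is running; it should live in the same environment where pip show vi-sdk found the package:

```python
import sys

# Absolute path of the interpreter currently running
print(sys.executable)

# sys.prefix points inside the virtual environment's directory
# when one is active, and at the system install otherwise
print(sys.prefix)
```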
Symptom: Errors during pip install about conflicting package versions.
Likely cause: Existing packages in the environment pin versions that clash with Vi SDK's dependencies.
Fix: Install into a fresh virtual environment (see Virtual environment setup above). Do not use sudo pip; it can break system Python packages.
Symptom: PyTorch is installed but torch.cuda.is_available() returns False.
Likely cause: NVIDIA drivers not installed, a PyTorch build for the wrong CUDA version, or a missing CUDA toolkit.
Fix:
Terminal
# Check that your NVIDIA driver is installed
nvidia-smi
# Reinstall PyTorch for your CUDA version
pip uninstall torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
# Verify
python -c "import torch; print(torch.cuda.is_available())"
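A quick diagnostic sketch: the snippet below reports whether the current environment has PyTorch at all and, if so, whether it can see a GPU (torch is only present after installing the inference extras):

```python
# Report what the current environment can see
try:
    import torch
    status = "cuda" if torch.cuda.is_available() else "cpu-only"
    print("torch", torch.__version__, "->", status)
    print("built for CUDA:", torch.version.cuda)  # None on CPU-only wheels
except ImportError:
    status = "torch not installed"
    print(status)
```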
Symptom: RuntimeError: CUDA out of memory when running inference.