Installation

The Vi SDK runs on Python 3.10 through 3.14 on Linux, macOS, and Windows. This page covers installation options, virtual environment setup, GPU configuration, and troubleshooting common issues.

Requirements

Requirement    Details
Python         3.10, 3.11, 3.12, 3.13, or 3.14
pip            Included with Python
OS             Linux, macOS, or Windows
Internet       Required for API access and package installation

Check your Python version before installing:

Terminal
python --version # or python3 --version

The output should show Python 3.10 or higher.
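The same check can be done from within Python, which is handy in setup scripts or CI. The helper below is a stdlib-only sketch of the supported range stated above:

```python
import sys

# Minor versions supported by the Vi SDK: 3.10 through 3.14.
SUPPORTED_MINORS = range(10, 15)

def is_supported(version_info=sys.version_info):
    """Return True if the interpreter falls in the supported range."""
    major, minor = version_info[0], version_info[1]
    return major == 3 and minor in SUPPORTED_MINORS

print("Supported interpreter:", is_supported())
```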


Basic installation

Terminal
pip install vi-sdk

The base install includes API functionality for:

  • Dataset management
  • Asset upload and download
  • Annotation workflows
  • Model operations

Optional feature sets

Install additional features as your workflow requires them.

Inference support

For running inference locally with vision-language models (Qwen2.5-VL, InternVL 3.5, Cosmos Reason1, NVILA):

Terminal
pip install vi-sdk[inference]

Jupyter notebook support

For proper rich formatting and progress tracking in Jupyter notebooks:

Terminal
pip install vi-sdk[jupyter]

Deployment support

For deploying models with NVIDIA NIM containers:

Terminal
pip install vi-sdk[deployment]

All features

Terminal
pip install vi-sdk[all]

Virtual environment setup

A virtual environment keeps Vi SDK's dependencies separate from other Python projects. This prevents version conflicts and makes your environment reproducible.

Terminal
# Create the environment
python3 -m venv vi-env

# Activate (Linux/macOS)
source vi-env/bin/activate

# Activate (Windows)
vi-env\Scripts\activate

# Install Vi SDK
pip install vi-sdk[all]

venv is included with Python and needs no additional install.

Which tool should I use?
  • venv: Built in, no setup needed. Good default.
  • conda: Popular in data science. Manages Python versions directly.
  • uv: Fastest install speeds. Good for large dependency trees like inference packages.
  • virtualenvwrapper: Adds convenient shortcuts for managing multiple environments.
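Whichever tool you choose, a stdlib-only check can confirm you are actually inside the activated environment, which is a common source of "installed but import fails" confusion:

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print("Inside a virtual environment:", in_virtualenv())
```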

GPU support for inference

A GPU is not required for the base SDK or dataset operations, but it makes a significant difference for local inference. CPU inference works for development and small-scale testing.

NVIDIA GPUs (CUDA)

Terminal
# Install PyTorch with CUDA 11.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# Then install Vi SDK with inference
pip install vi-sdk[inference]

Verify CUDA is available:

import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"GPU count: {torch.cuda.device_count()}")
if torch.cuda.is_available():
    print(f"GPU name: {torch.cuda.get_device_name(0)}")

Apple Silicon (MPS)

PyTorch detects and uses Metal Performance Shaders automatically on Apple Silicon Macs with macOS 12.3+:

Terminal
pip install vi-sdk[inference]

Verify MPS is available:

import torch

print(f"MPS available: {torch.backends.mps.is_available()}")
print(f"MPS built: {torch.backends.mps.is_built()}")

CPU only

Terminal
# CPU-only PyTorch
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu

# Then Vi SDK with inference
pip install vi-sdk[inference]

Hardware performance reference

Hardware              Use case
NVIDIA GPU            Production inference (10-100x faster than CPU)
Apple Silicon (MPS)   Development and testing (good performance)
CPU                   Small-scale inference and development
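If your code needs to run on any of the hardware above, a small helper can pick the best available device and fall back to CPU. This is a sketch (the function name is ours, not part of the SDK); it uses only standard PyTorch APIs and degrades gracefully when PyTorch is absent:

```python
def pick_device():
    """Pick the best available inference device, falling back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"          # PyTorch not installed; CPU-only code paths
    if torch.cuda.is_available():
        return "cuda"         # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"          # Apple Silicon
    return "cpu"

print("Selected device:", pick_device())
```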

Verifying the install

import vi

print(f"Vi SDK version: {vi.__version__}")

from vi import Client
from vi.dataset.loaders import ViDataset
print("Core modules loaded.")
Expected output:

Output
Vi SDK version: 0.1.0
Core modules loaded.

Platform-specific setup

Linux

Terminal
sudo apt-get update
sudo apt-get install python3 python3-pip python3-venv

# Optional: build tools
sudo apt-get install build-essential python3-dev

macOS

Terminal
# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Python
brew install python

# Verify
python3 --version

Windows

1. Download Python from python.org
2. Run the installer
3. Check "Add Python to PATH" during installation
4. Verify in Command Prompt

Command Prompt
python --version
pip --version

Upgrading

Terminal
# Upgrade core SDK
pip install --upgrade vi-sdk

# Upgrade with all features
pip install --upgrade vi-sdk[all]

Check your current version:

import vi
print(vi.__version__)

Uninstalling

Terminal
pip uninstall vi-sdk

To remove everything including the virtual environment:

Terminal
# Deactivate first
deactivate

# Remove the environment directory
rm -rf vi-env      # Linux/macOS
rmdir /s vi-env    # Windows

Troubleshooting

Symptom: Import fails after installation.

Likely cause: Vi SDK was installed in a different Python environment from the one you're running.

Fix:

Terminal
# Check which environment has vi-sdk
pip show vi-sdk

# Install in the current Python interpreter
python -m pip install vi-sdk

# Confirm which Python you're using
which python    # Linux/macOS
where python    # Windows
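The same diagnosis can be made from inside Python by printing which interpreter is running and where it installs packages (stdlib only):

```python
import sys
import sysconfig

# Compare these paths with the environment where `pip install vi-sdk` ran.
print("Interpreter:", sys.executable)
print("site-packages:", sysconfig.get_path("purelib"))
```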

Symptom: Errors during pip install about conflicting package versions.

Fix: Create a fresh virtual environment:

Terminal
python -m venv fresh-env
source fresh-env/bin/activate
pip install --upgrade pip setuptools wheel
pip install vi-sdk[all]

Symptom: SSL errors during pip install.

Fix:

Terminal
# Update certificates
pip install --upgrade certifi

# Temporary workaround (use only on trusted networks)
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org vi-sdk

Symptom: Permission denied during installation.

Fix: Use a virtual environment or the --user flag:

Terminal
# Option 1: user install
pip install --user vi-sdk

# Option 2: virtual environment (preferred)
python3 -m venv vi-env
source vi-env/bin/activate
pip install vi-sdk

Do not use sudo pip. It can break system Python packages.

Symptom: PyTorch is installed but torch.cuda.is_available() returns False.

Fix:

Terminal
# Check that your NVIDIA driver is installed
nvidia-smi

# Reinstall PyTorch for your CUDA version
pip uninstall torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# Verify
python -c "import torch; print(torch.cuda.is_available())"

Common causes: NVIDIA drivers not installed, wrong CUDA version for the PyTorch build, or missing CUDA toolkit.
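These causes can be narrowed down from Python. The sketch below (the helper name is ours, not part of the SDK) reports which PyTorch build is installed and whether it was compiled with CUDA:

```python
def cuda_diagnostics():
    """Summarize the installed PyTorch build and CUDA availability."""
    try:
        import torch
    except ImportError:
        return {"torch": None}    # PyTorch is not installed at all
    return {
        "torch": torch.__version__,
        "built_with_cuda": torch.version.cuda,    # None means a CPU-only wheel
        "cuda_available": torch.cuda.is_available(),
    }

print(cuda_diagnostics())
```

If `built_with_cuda` is None, pip installed a CPU-only wheel; reinstall from the CUDA index URL shown above.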

Symptom: RuntimeError: CUDA out of memory when running inference.

Fix: Use quantization to reduce GPU memory use:

from vi.inference import ViModel

model = ViModel(
    secret_key="your-key",
    organization_id="your-org",
    run_id="your-run",
    load_in_4bit=True  # 4-bit quantization reduces VRAM usage
)

# Also try reducing batch size
results = model(
    source=images,
    user_prompt="...",
    batch_size=1
)

See the quantization concepts page for trade-offs between 4-bit and 8-bit quantization.
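To see why quantization helps, a back-of-the-envelope estimate of weight memory is useful (weights only; activations and the KV cache add more):

```python
def weight_gb(params_billion, bits):
    """Approximate GB needed to hold model weights at a given precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B parameters at {bits}-bit: ~{weight_gb(7, bits):.1f} GB")
```

By this estimate, a 7B-parameter model drops from about 14 GB of weights at 16-bit to about 3.5 GB at 4-bit.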


Next steps

Getting Started

Run your first SDK call: authenticate, list datasets, and run inference.

Create A Secret Key

Generate API credentials for authenticating SDK requests.

Vi SDK Overview

Full feature list, resource hierarchy, and error handling patterns.