Deployment and Resources

Understand how to deploy your trained VLM and how Datature Vi measures resource consumption.

After training, you need a way to run your model in production and a way to manage the resources that power it. This section covers your deployment options and how Datature Vi tracks storage and compute usage.


In this section

How Do I Deploy My Trained Model?

Compare Vi SDK, NVIDIA NIM, and self-hosted options for production deployment.

Data Rows and Compute Credits

What consumes each resource, how to estimate costs, and what happens when you reach plan limits.
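As a rough illustration of the kind of estimate that page walks through, the sketch below tallies the two resources this section tracks. The function name, its parameters, and every number in it are hypothetical placeholders, not actual Datature Vi rates; see the Data Rows and Compute Credits page and Plans and Pricing for real figures.

```python
# Hypothetical sketch of a usage estimate. The per-run credit cost is a
# placeholder, NOT real Datature Vi pricing.

def estimate_usage(num_data_rows: int, training_runs: int,
                   credits_per_run: int) -> dict:
    """Tally data rows stored and compute credits consumed."""
    return {
        "data_rows": num_data_rows,
        "compute_credits": training_runs * credits_per_run,
    }

usage = estimate_usage(num_data_rows=10_000, training_runs=3,
                       credits_per_run=500)
print(usage)  # {'data_rows': 10000, 'compute_credits': 1500}
```

Substituting the real per-run cost from your plan into `credits_per_run` gives a first-order budget before you kick off a batch of training runs.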


Next steps

Download a Model

Export your trained model weights for local or self-hosted deployment.

Plans and Pricing

See what each plan includes and how resources scale.

Vi SDK Reference

Run inference, manage datasets, and automate workflows with the Python SDK.