Training and Evaluation

Understand how Datature Vi trains vision-language models, what each setting controls, and how to measure your model's performance.

Datature Vi fine-tunes vision-language models (VLMs) on your annotated data. This section covers each stage of the training-to-inference pipeline: how to write system prompts that guide your model, what training settings control, how LoRA and quantization reduce cost, what evaluation metrics tell you, and how inference generates output.
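To make the LoRA cost reduction concrete, here is a minimal sketch (not Datature Vi's implementation; the layer size and rank are hypothetical) of why LoRA trains far fewer parameters: instead of updating a full d_in × d_out weight matrix, it trains two low-rank factors of shapes d_in × r and r × d_out.

```python
# Hedged illustration of LoRA's parameter savings.
# The dimensions below are hypothetical, not values used by Datature Vi.
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    # Full fine-tuning updates every entry of the d_in x d_out weight matrix.
    full = d_in * d_out
    # LoRA freezes that matrix and trains two low-rank factors:
    # A with shape (d_in, rank) and B with shape (rank, d_out).
    lora = rank * (d_in + d_out)
    return full, lora

full, lora = lora_trainable_params(4096, 4096, rank=16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

For a 4096 × 4096 layer at rank 16, LoRA trains roughly 0.8% of the weights, which is the main reason it cuts memory and compute during fine-tuning.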

Next steps

Annotation Guide

Create effective training data with good annotation practices.

Configure Your Model

Set up training parameters and start a training run.

Deployment

Download your trained model or deploy it with the Vi SDK or NVIDIA NIM.