How Do Data Rows and Compute Credits Work?

Learn what data rows and compute credits are, what consumes them, and how to plan your resource usage in Datature Vi.

Datature Vi uses two resource currencies: data rows for storage and annotation, and compute credits for GPU training time. Your plan sets a quota for each. This page explains what consumes each resource, how to estimate costs before training, and what happens when you hit a limit.


Two resource types

Resource          What it measures                                                            Resets
Data rows         Storage and annotation volume (images uploaded, annotation pairs created)   Fixed quota per plan
Compute credits   GPU training time (hours of GPU usage weighted by GPU tier)                 Monthly reset

Both resources are tracked at the organization level, shared across all projects and team members. You can view current usage in Settings > Resources. See Resource Usage for the full monitoring guide.


What consumes data rows

Deleting content returns data rows

When you delete an image or remove an annotation pair, the data rows tied to that content go back to your available balance immediately. Check Settings > Resources after large cleanups so your quota matches what you expect.

Data rows are consumed when you add content to your datasets.

Action                                                                            Data rows consumed
Upload one image                                                                  5 data rows
Create one annotation pair (question + answer, phrase + box, or freeform entry)   1 data row

Example: Uploading 100 images with 3 annotation pairs each consumes 100 x 5 + 100 x 3 = 800 data rows.
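The arithmetic above can be wrapped in a small helper for batch planning. This is an illustrative sketch, not part of any Datature API; only the two cost constants come from the table.

```python
IMAGE_COST = 5        # data rows per uploaded image (from the table above)
ANNOTATION_COST = 1   # data rows per annotation pair

def data_rows_consumed(images: int, pairs_per_image: int) -> int:
    """Data rows consumed by uploading `images` images,
    each annotated with `pairs_per_image` annotation pairs."""
    return images * IMAGE_COST + images * pairs_per_image * ANNOTATION_COST

# The worked example: 100 images with 3 annotation pairs each.
print(data_rows_consumed(100, 3))  # 800
```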

Deleting images or annotation pairs returns the associated data rows to your quota. Uploads still consume data rows on ingest, so curate before you upload when you can.

  • Curate before uploading. Remove duplicate, blurry, or irrelevant images from your dataset before upload. Each stored image counts as 5 data rows while it stays in the dataset; delete the image to return those rows.
  • Annotate with purpose. Each annotation pair costs 1 data row. Focus on quality annotations that provide clear training signal rather than annotating every possible detail in an image.
  • Test with a small batch first. Upload 20-50 images, train a model, and check results before committing your full dataset.

What consumes compute credits

Compute credits are consumed during training runs. The cost depends on two factors: how long the run takes and which GPU you selected.

Each GPU tier applies a multiplier to the training duration. These rates match the GPU multiplier table on Resource Usage, the same page where you read live quotas. For multi-GPU jobs, the multipliers add per GPU: for example, a job on 4× A10G consumes 4 × 2.5 = 10.0 credits per real-time minute.

GPU             Multiplier   Example: 1-hour run
T4              1.0×         60 credits
L4              2.0×         120 credits
A10G            2.5×         150 credits
A100 (40 GB)    4.0×         240 credits
A100 (80 GB)    6.0×         360 credits
H100            12.0×        720 credits

Calculation: Credits consumed = Training duration (minutes) × GPU multiplier (sum all GPUs when you use more than one).

A 2-hour training run on one A10G GPU costs 120 minutes × 2.5 = 300 compute credits.
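The same formula in code, assuming you pass one multiplier per GPU in the run (the function name is illustrative, not a Datature SDK call):

```python
def compute_credits(duration_minutes: float, gpu_multipliers: list[float]) -> float:
    """Credits = training duration in minutes × the sum of per-GPU multipliers."""
    return duration_minutes * sum(gpu_multipliers)

# 2-hour run on a single A10G (2.5×):
print(compute_credits(120, [2.5]))     # 300.0
# 1-hour run on 4× A10G (4 × 2.5 = 10.0 credits per minute):
print(compute_credits(60, [2.5] * 4))  # 600.0
```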

Compute credits reset monthly. Unused credits from the current month do not carry over.


How to estimate costs before training

Before launching a training run, estimate your compute credit consumption:

1. Check your dataset size. Larger datasets take longer to train: a dataset of 100 images trains faster than one with 1,000 images.

2. Note your training settings. More epochs and larger batch sizes increase training time. Check the defaults in Model Settings for your chosen model.

3. Pick a GPU tier. Start with the recommendation Datature Vi shows for your model size. Smaller GPUs cost fewer credits per minute but may take longer to train.

4. Estimate the duration. As a rough guide, 100 images on a 7B model with LoRA on an A10G GPU takes about 1 hour. Scale up from there based on your dataset size and epoch count.

Small experiment: 50 images, Qwen3.5 4B, LoRA, T4 GPU, ~30 minutes = 30 credits.

Medium project: 500 images, Qwen3.5 9B, LoRA, A10G GPU, ~2 hours ≈ 300 credits (120 minutes × 2.5).

Large production run (order of magnitude): 2,000 images, Qwen3.5 27B, full fine-tuning, 8× A100 (40 GB), ~6 hours ≈ 11,520 credits (360 minutes × 32.0, from eight GPUs at 4.0× each).

These are rough estimates. Actual duration depends on image resolution, annotation complexity, and training settings.
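Putting the multiplier table and a duration guess together, a rough pre-flight estimator might look like the sketch below. The multiplier values are copied from the GPU tier table; the function and dictionary names are hypothetical, not part of the Vi SDK.

```python
# Per-minute credit multipliers, copied from the GPU tier table above.
GPU_MULTIPLIER = {
    "T4": 1.0, "L4": 2.0, "A10G": 2.5,
    "A100 (40 GB)": 4.0, "A100 (80 GB)": 6.0, "H100": 12.0,
}

def estimate_credits(duration_minutes: float, gpu: str, gpu_count: int = 1) -> float:
    """Estimated compute credits for a run on `gpu_count` GPUs of one tier."""
    return duration_minutes * GPU_MULTIPLIER[gpu] * gpu_count

# The three scenarios above:
print(estimate_credits(30, "T4"))                          # 30.0  (small experiment)
print(estimate_credits(120, "A10G"))                       # 300.0 (medium project)
print(estimate_credits(360, "A100 (40 GB)", gpu_count=8))  # 11520.0 (large run)
```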


What happens when you hit a limit

  • Data rows exhausted: You cannot upload new images or create new annotation pairs; existing datasets and models are unaffected. Delete unused content to return rows to your balance, or upgrade your plan (Enterprise plans can arrange additional quota).
  • Compute credits exhausted: You cannot start new training runs; training runs already in progress complete normally. Local SDK inference on your hardware is unaffected, though hosted deployment may pause while credits are depleted. Wait for the monthly reset or upgrade your plan.

Hitting a limit never deletes your data, models, or training results. You retain full access to everything you've already created.


Frequently asked questions

Do unused compute credits carry over to the next month?

No. Compute credits reset to your plan's monthly allocation at the start of each billing cycle. Plan your training runs to use your monthly credits before they expire.

What if I need more compute credits than my plan includes?

Contact the Datature team through your organization's support channel to discuss add-on credits. Enterprise plans offer custom allocations.

Does running inference consume compute credits?

Local inference with the Vi SDK on hardware you control does not spend Datature Vi compute credits. Hosted model deployment (serving from Datature Vi's infrastructure) does spend compute credits per minute of serving; see Resource Usage. Training runs always spend credits for GPU time on Datature Vi.

What can I do on the Free plan?

The Free plan includes enough data rows and compute credits to upload a small dataset, train a model with LoRA on a T4 GPU, and test inference. See Plans and Pricing for exact allocations by tier.

Do I get data rows back when I delete images or annotations?

Yes. Removed assets and deleted annotation pairs return their data rows to your available balance right away. Confirm the new total on the Resources page. See Resource Usage for the full monitoring guide.


Related resources

Resource Usage

Monitor your data row and compute credit consumption.

Plans and Pricing

Compare plan tiers and feature availability.

Train a Model

Step-by-step guide to launching your first training run.