Training a model

Description

This is a label for your convenience that will be shown in the AI pipelines table. It helps you distinguish between multiple versions.

Choose classes to include

Select one or more of the classes you created to include in training; any classes you leave unchecked are excluded. The labels used for training are the intersection of the files marked as completed (shown in Included images) and the classes you include here.
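Conceptually, the selection behaves like a set intersection. The sketch below is purely illustrative (the file and class names are hypothetical, and this is not Biodock's actual code):

```python
# Illustrative only -- not Biodock's implementation. Names are hypothetical.
completed_files = {"plate_01.tif", "plate_02.tif"}   # files marked as completed
included_classes = {"nucleus"}                       # classes checked for training

labels = [
    {"file": "plate_01.tif", "class": "nucleus"},    # included
    {"file": "plate_01.tif", "class": "membrane"},   # class unchecked -> excluded
    {"file": "plate_03.tif", "class": "nucleus"},    # file not completed -> excluded
]

training_labels = [
    lab for lab in labels
    if lab["file"] in completed_files and lab["class"] in included_classes
]
print(training_labels)  # [{'file': 'plate_01.tif', 'class': 'nucleus'}]
```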

Create a test set

In this section, you will create a set of data that is not trained on and is used only to assess performance. This is often called a test set or hold-out set, and it is important for realistically estimating the performance you will see at analysis time. You can read more about training and test sets here.
In your first few iterations, it's more important to use all of your data for training, since this won't be your final model. We recommend not reserving any tiles for testing until your model's performance starts to look very good.
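To illustrate what a hold-out split does, here is a minimal sketch using scikit-learn; Biodock handles this for you in the UI, and the tile names are hypothetical:

```python
from sklearn.model_selection import train_test_split

# Hypothetical list of annotated tiles.
tiles = [f"tile_{i:03d}" for i in range(100)]

# Reserve 20% of tiles for testing; the model never trains on these,
# so metrics on them estimate real analysis-time performance.
train_tiles, test_tiles = train_test_split(tiles, test_size=0.2, random_state=42)
print(len(train_tiles), len(test_tiles))  # 80 20
```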

Errors and warnings

Errors: Items that must be resolved before you can submit. One example of an error is having too few annotations; we require more than 50 annotations for each AI training run.
Warnings: Items we strongly recommend you resolve before submitting the training job, for best results. Warnings will not stop you from submitting the training job.
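As a mental model, the annotation-count error amounts to a simple validation check. The sketch below is hypothetical (the warning rule and counts are made up for illustration; only the more-than-50-annotations requirement comes from this page):

```python
MIN_ANNOTATIONS = 50  # from the docs: more than 50 annotations required per training

annotation_counts = {"nucleus": 38, "membrane": 20}  # hypothetical counts
total = sum(annotation_counts.values())

errors, warnings = [], []
if total <= MIN_ANNOTATIONS:
    errors.append(f"too few annotations ({total}); more than {MIN_ANNOTATIONS} required")
if any(n < 10 for n in annotation_counts.values()):   # hypothetical warning rule
    warnings.append("some classes have very few annotations")

if errors:
    raise ValueError("cannot submit: " + "; ".join(errors))  # errors block submission
for w in warnings:
    print("warning (submission still allowed):", w)          # warnings do not
```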

Supported model types

We support the following model types:
  • Instance segmentation (classify, segment, and separate each type of object within an image)
  • Semantic segmentation (classify and segment out object regions within an image)
Our segmentation models are state of the art and use transfer learning from an optimized dataset. The architectures are usually built on vision transformer backbones.
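To make the distinction between the two model types concrete, here is a toy example of the two output formats (illustrative only, not Biodock's internal representation):

```python
import numpy as np

# A toy 4x4 image containing two touching cells.

# Semantic segmentation: every cell pixel gets the same class id (1 = "cell"),
# so the two touching objects merge into one region.
semantic_mask = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
])

# Instance segmentation: each object gets its own id, so touching cells
# are classified, segmented, AND separated.
instance_mask = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 2, 2, 0],
    [0, 2, 2, 0],
])

print(np.unique(semantic_mask))  # [0 1]   -> one merged "cell" region
print(np.unique(instance_mask))  # [0 1 2] -> two distinct cells
```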

Training run performance

Biodock allows you to evaluate a training run's performance qualitatively and statistically after a model has been trained. You can view these results by running an evaluation on a trained model (see Train and evaluate), or by navigating to AI Projects -> Active AI models (or All AI models) and clicking the version you would like to open.

Training details

The training details section helps you get a broad picture of the AI pipeline and the data it was trained on. You can review information such as total images, included images, average number of images, included classes, and errors or warnings.

Qualitative performance

The qualitative performance tab lets you see, qualitatively, how well the model performed on your test images and your training images. Use the toggle to switch between Test images and Train images.
You'll immediately see the image viewer, which lets you move between images and view the objects that the AI predicted.

Thresholds

For each object, your AI model outputs a confidence score, and the threshold sets the minimum score an object needs to be included in the output. Drag the slider up and down to control which objects are displayed: a lower threshold outputs more objects, while a higher threshold outputs fewer. Then, to lock in your threshold for use during Analysis, click Update analysis threshold.
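Under the hood, thresholding is just a confidence filter. A minimal sketch, with hypothetical field names and scores:

```python
# Hypothetical model output: each predicted object carries a confidence score.
predictions = [
    {"object_id": 1, "confidence": 0.92},
    {"object_id": 2, "confidence": 0.55},
    {"object_id": 3, "confidence": 0.31},
]

def apply_threshold(preds, threshold):
    """Keep only objects whose confidence meets the minimum score."""
    return [p for p in preds if p["confidence"] >= threshold]

print(len(apply_threshold(predictions, 0.3)))  # lower threshold  -> 3 objects
print(len(apply_threshold(predictions, 0.6)))  # higher threshold -> 1 object
```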

Choose visible classes

Use the class selector and the Show outlines control to toggle outlines on and off, which helps you inspect how the segmentation performed.

Statistical performance

The statistical performance section shows a table of summary statistics describing how the model performed on both the training and test sets. These metrics are especially useful when comparing performance across different models.
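As an illustration of the kind of summary statistics involved, this sketch computes precision, recall, and F1 from object-matching counts; the exact metrics Biodock reports may differ, and the counts below are hypothetical:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from matched/unmatched object counts."""
    precision = tp / (tp + fp)  # of predicted objects, how many were real
    recall = tp / (tp + fn)     # of real objects, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts from comparing predictions against test-set labels.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.90 recall=0.82 f1=0.86
```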