Training a model

Description

This is a convenience label shown in the AI pipelines table; it helps you distinguish between multiple versions.

Choose classes to include

Select one or more classes that you created to include in training. Any classes you do not check here will be excluded from training.

Create a test set

You can customize which tiles to use for training versus testing. Test images are never trained on and are used only to assess performance. This is important for realistically evaluating the performance you will get at analysis time.

In your first few iterations, it's more important to use all of your data for training, since this won't be your final model. We recommend not holding out any tiles for testing until your model's performance starts to look very good.
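
To build intuition for what a held-out test set does, here is a minimal Python sketch of a random tile split. The tile IDs and the `split_tiles` helper are hypothetical; Biodock manages the actual split for you in the UI.

```python
import random

def split_tiles(tile_ids, test_fraction=0.2, seed=42):
    """Randomly hold out a fraction of tiles as a test set.

    Illustrative only: the point is that test tiles are kept out
    of training entirely, so they measure real generalization.
    """
    rng = random.Random(seed)
    shuffled = list(tile_ids)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# Example: 10 tiles with 20% held out for testing
train, test = split_tiles([f"tile_{i}" for i in range(10)])
print(len(train), len(test))  # 8 2
```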

Supported model types

We support the following model types:

  • Instance segmentation (classify, segment, and separate each type of object within an image)

  • Semantic segmentation (classify and segment out object regions within an image)
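
To make the distinction concrete, here is a small illustrative sketch (the label arrays are made up) showing how the two output formats differ on a pair of touching objects:

```python
import numpy as np

# Semantic segmentation: every pixel gets a class ID, so two touching
# objects of the same class merge into one region of class 1.
semantic_mask = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
])

# Instance segmentation: objects are also separated, so each object
# keeps its own ID even when they share a class.
instance_mask = np.array([
    [1, 1, 0, 0],
    [1, 1, 2, 2],
    [0, 0, 2, 2],
])

print(np.unique(semantic_mask))  # [0 1]   -> one merged region
print(np.unique(instance_mask))  # [0 1 2] -> two separate objects
```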

Our segmentation models use state-of-the-art architectures, transfer-learned from an optimized dataset. These architectures are usually built on vision transformer backbones.
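
Biodock's exact architectures and training data are not exposed, but the general transfer-learning pattern looks roughly like the sketch below, which uses torchvision's ViT-B/16 purely as a stand-in backbone:

```python
import torch
import torchvision

# Start from a vision transformer pretrained on a large dataset
# (downloads weights on first run).
model = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")

# Freeze the pretrained backbone so its weights are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the head with one sized for your own classes (3 here);
# only these new weights are updated during fine-tuning.
model.heads = torch.nn.Linear(model.hidden_dim, 3)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```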

Training run performance

Biodock allows you to evaluate a training run's performance qualitatively and statistically after a model has been trained. You can view these results by running an evaluation on a trained model (see Label and train), or by navigating to AI Projects -> Active AI models (or All AI models) and clicking on the version you would like to open.

Training details

The training details section helps you get a broad picture of the AI pipeline and the data it was trained on. You can review information such as total images, included images, average number of images, included classes, and errors or warnings.

Qualitative performance

The qualitative performance tab lets you see, qualitatively, how well the model performed on your test images and your training images. To switch between the two, use the Test images / Train images toggle.

You'll immediately see the image viewer, which lets you move between images and inspect the objects predicted by the AI.

Thresholds

For each object, your AI model outputs a confidence score. The threshold controls the minimum score an object needs to be included in the output. Drag the slider up and down to control which objects are displayed: a lower threshold outputs more objects, while a higher threshold outputs fewer. Then, to lock in your threshold for use during Analysis, click Update analysis threshold.
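
Conceptually, the threshold is just a filter on confidence scores. A minimal sketch with hypothetical predictions:

```python
# Hypothetical (object, confidence) predictions from a trained model.
predictions = [("cell_1", 0.94), ("cell_2", 0.61), ("cell_3", 0.32)]

def apply_threshold(preds, threshold):
    """Keep only objects whose confidence meets the threshold."""
    return [(obj, score) for obj, score in preds if score >= threshold]

print(apply_threshold(predictions, 0.3))  # all three objects
print(apply_threshold(predictions, 0.9))  # only cell_1
```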

Choose visible classes

Use the class selector and the Show outlines option to toggle outlines on and off; this helps you inspect how well the segmentation performed.

Statistical performance

The statistical performance section shows a table of summary statistics describing how the model performed on both the train and test sets. These metrics are especially useful when comparing performance between different models.
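
Typical object-level summary statistics of this kind (precision, recall, F1) are derived from match counts as in the following sketch; this illustrates common segmentation metrics, not necessarily the exact computation Biodock reports:

```python
def summary_stats(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative counts (illustrative of common metrics)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 90 matched objects, 10 spurious, 20 missed
print(summary_stats(90, 10, 20))  # (0.9, 0.8181..., 0.8571...)
```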
