Training run performance
Biodock allows you to evaluate a training run's performance quantitatively and statistically after a model has been trained. You can reach these results by running an evaluation on a trained model (see Train and evaluate), or by navigating to Create AI -> Active AI models or All AI models and clicking the version you would like to open.
The training details section gives you a broad picture of the AI pipeline and the data it was trained on. You can review information such as total images, included images, average number of images, included classes, and any errors or warnings.
The qualitative performance tab allows you to see, qualitatively, how well the model performed on your test images and your training images. To switch between the two, use the Test images / Train images toggle.
You'll immediately see the image viewer, which lets you step between images and inspect the objects predicted by AI.
For each object, your AI model outputs a confidence score. The threshold controls the minimum score an object needs in order to be included in the output. Drag the slider up and down to control which objects are displayed: a lower threshold outputs more objects, while a higher threshold outputs fewer. Then, to lock in your threshold for use during Analysis, click Update analysis threshold.
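To build intuition for what the threshold slider is doing, here is a minimal sketch of confidence-based filtering. The detection list and field names below are purely illustrative assumptions, not Biodock's actual output format:

```python
# Hypothetical detections with confidence scores (illustrative only;
# not Biodock's actual output format).
detections = [
    {"label": "cell", "confidence": 0.92},
    {"label": "cell", "confidence": 0.55},
    {"label": "debris", "confidence": 0.31},
]

def filter_by_threshold(objects, threshold):
    """Keep only objects whose confidence meets or exceeds the threshold."""
    return [o for o in objects if o["confidence"] >= threshold]

# A lower threshold keeps more objects; a higher one keeps fewer.
print(len(filter_by_threshold(detections, 0.3)))  # all 3 objects pass
print(len(filter_by_threshold(detections, 0.5)))  # 2 objects pass
print(len(filter_by_threshold(detections, 0.9)))  # only 1 object passes
```

Raising the threshold trades recall for precision: borderline objects disappear first, leaving only the predictions the model is most certain about.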
Use the class selector and Show outlines to toggle outlines on and off, which helps you inspect how well the segmentation performed.
The statistical performance section shows a table of summary statistics describing how the model performed on both the train and test sets. These metrics are especially useful when comparing performance across different models.